
Specialty Transformers

Results: 17 items
POE070-PD13050S
BOURNS
Bourns pulse transformer, 3.33:1 turns ratio, surface-mount, 100 μH primary inductance
Quantity: 595
Ship Date: 6-13 working days
1+ $5.681
10+ $4.444
25+ $3.619
50+ $3.4452
300+ $3.3495
2400+ $2.562
MOQ: 1
Mult: 1
SPQ: 1
SRF1260-102M
BOURNS
1 mH, 4 mH, SMD mount, 12.5 mm × 12.5 mm × 6.2 mm
Quantity: 1618
Ship Date: 5-12 working days
400+ $0.9072
800+ $0.8442
1200+ $0.8274
2800+ $0.7907
5200+ $0.7791
7600+ $0.7739
10000+ $0.7697
15200+ $0.7655
MOQ: 400
Mult: 400
PM3604-5-RC
BOURNS
5 μH, 20 μH, SMD mount, 13.97 mm × 11.48 mm × 6.73 mm
Quantity: 175
Ship Date: 6-13 working days
1+ $1.679
10+ $1.495
25+ $1.309
100+ $1.242
250+ $1.1016
400+ $1.0133
MOQ: 1
Mult: 1
SPQ: 1
SRF1260-1R5Y
BOURNS
1.5 μH, 6 μH, SMD mount, 12.5 mm × 12.5 mm × 6.2 mm
Quantity: 400
Ship Date: 7-12 working days
400+ $0.6836
800+ $0.6445
1200+ $0.6228
2000+ $0.5965
2800+ $0.5797
4000+ $0.5625
10000+ $0.538
MOQ: 400
Mult: 400
SPQ: 1
SRF1280A-820M
BOURNS
82 μH, 328 μH, SMD mount, 12.5 mm × 12.5 mm × 8 mm
Quantity: 0
Ship Date: 6-13 working days
400+ $1.05
1600+ $1.05
2400+ $1.0112
MOQ: 400
Mult: 1
SPQ: 1
2-2-1JEL
BOURNS
70 kHz–200 MHz, SMD mount, 8.38 mm (length) × 6.35 mm (height)
Quantity: 0
Ship Date: 7-12 working days
MOQ: 1
Mult: 1
SPQ: 1
SM74139E
BOURNS
10 kHz–100 kHz, through-hole mounting, 13.46 mm (length) × 12.32 mm (height)
Quantity: 0
Ship Date: 6-13 working days
700+ $5.145
MOQ: 700
Mult: 1
SPQ: 1
SM75719E
BOURNS
10 kHz–100 kHz, through-hole mounting, 13.46 mm (length) × 12.32 mm (height)
Quantity: 0
Ship Date: 6-13 working days
900+ $3.9795
MOQ: 900
Mult: 1
SPQ: 1
2-1-6JEL
BOURNS
30 kHz–300 MHz, 1:1 wideband RF transformer, SMD mount, 7.62 mm (length) × 8.38 mm (height)
Quantity: 0
Ship Date: 7-12 working days
MOQ: 1
Mult: 1
SPQ: 1
SM74346E
BOURNS
10 kHz–100 kHz, through-hole mounting, 13.46 mm (length) × 12.32 mm (height)
Quantity: 0
Ship Date: 6-13 working days
700+ $2.52
2800+ $2.457
MOQ: 700
Mult: 1
SPQ: 1
LM-NP-1001-B1L3
BOURNS
Audio transformer, 1:1, 6500 VDC, 66 Ω primary DCR, 66 Ω secondary DCR, 8-terminal PC-pin through-hole
Quantity: 0
Ship Date: 10-15 working days
MOQ: 3360
Mult: 1680
SPQ: 1680
2-16-6WL
BOURNS
30 kHz–75 MHz, SMD mount, 10.92 mm (length) × 6.35 mm (height)
Quantity: 0
Ship Date: 7-12 working days
MOQ: 1
Mult: 1
SPQ: 1
2-1-6DL
BOURNS
30 kHz–300 MHz, 1:1 wideband RF transformer, DIP-6 through-hole mounting, 7.62 mm (length) × 8.83 mm (height)
Quantity: 0
Ship Date: 7-12 working days
MOQ: 1
Mult: 1
SPQ: 1
3-4-6WL
BOURNS
20 kHz–200 MHz, 1:2 wideband RF transformer, SMD mount, 7.62 mm (length) × 6.35 mm (height)
Quantity: 0
Ship Date: 7-12 working days
MOQ: 1
Mult: 1
SPQ: 1
2-1-6WL
BOURNS
30 kHz–300 MHz, 1:1 wideband RF transformer, SMD mount, 7.62 mm (length) × 10.92 mm (height)
Quantity: 0
Ship Date: 7-12 working days
MOQ: 1
Mult: 1
SPQ: 1
3-1-6WL
BOURNS
10 kHz–150 MHz, 1:1 wideband RF transformer, SMD mount, 7.62 mm (length) × 6.35 mm (height)
Quantity: 0
Ship Date: 7-12 working days
MOQ: 1
Mult: 1
SPQ: 1
SM75225E
BOURNS
10 kHz–100 kHz, through-hole mounting, 13.46 mm (length) × 12.32 mm (height)
Quantity: 0
Ship Date: 6-13 working days
330+ $4.326
660+ $4.116
990+ $3.9795
MOQ: 330
Mult: 1
SPQ: 1

Specialty Transformers

"Other Transformers" refers to a class of neural network architectures that extend the original Transformer model, introduced in the paper "Attention Is All You Need" by Vaswani et al. in 2017. The original Transformer revolutionized natural language processing (NLP) with its use of self-attention mechanisms to process sequences of data, such as text or time series.

Definition:
Other Transformers are variations or extensions of the basic Transformer architecture, designed to address specific challenges or to improve performance in various tasks. They often incorporate additional layers, attention mechanisms, or training techniques to enhance the model's capabilities.

Functions:
1. Enhanced Attention Mechanisms: Variants refine how attention is computed; the original model's multi-head attention lets the network focus on different parts of the input sequence simultaneously, and later variants add sparse or linearized attention to reduce its quadratic cost.
2. Positional Encoding: To preserve the order of sequence data, positional encodings are added to the input embeddings.
3. Layer Normalization: This technique is used to stabilize the training of deep networks by normalizing the inputs to each layer.
4. Feedforward Networks: Each Transformer layer includes a feedforward neural network that processes the attention outputs.
5. Residual Connections: These connections help in training deeper networks by adding the output of a layer to its input before passing it to the next layer.
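The components listed above fit together in a single encoder layer: attention over positionally encoded inputs, a residual connection, layer normalization, and a position-wise feedforward network. The following is a minimal single-head sketch in NumPy to illustrate the data flow; all sizes, weights, and the random input are illustrative assumptions, not any library's actual API or a production implementation.

```python
# Minimal Transformer encoder layer sketch (NumPy only, single-head).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def positional_encoding(seq_len, d_model):
    # Sinusoidal encodings: sin on even dimensions, cos on odd ones.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def attention(q, k, v):
    # Scaled dot-product attention over one sequence.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def layer_norm(x, eps=1e-5):
    # Normalize each position's features to zero mean, unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def encoder_layer(x, w_q, w_k, w_v, w1, w2):
    # Self-attention with a residual connection, then layer norm.
    attn = attention(x @ w_q, x @ w_k, x @ w_v)
    x = layer_norm(x + attn)
    # Position-wise feedforward (ReLU), again with residual + norm.
    ff = np.maximum(0.0, x @ w1) @ w2
    return layer_norm(x + ff)

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # toy sizes, chosen for the demo
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
ws = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(5)]
out = encoder_layer(x, *ws)
print(out.shape)  # (4, 8): same shape in and out, so layers can be stacked
```

Because each layer maps a (seq_len, d_model) array to the same shape, these layers stack to arbitrary depth, which is how full Transformer encoders are built.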

Applications:
- Natural Language Understanding (NLU): For tasks like sentiment analysis, question answering, and text classification.
- Machine Translation: To translate text from one language to another.
- Speech Recognition: Transcribing spoken language into written text.
- Time Series Analysis: For forecasting and pattern recognition in sequential data.
- Image Recognition: Some Transformers have been adapted for computer vision tasks.

Selection Criteria:
When choosing an Other Transformer model, consider the following:
1. Task Specificity: The model should be suitable for the specific task at hand, whether it's translation, summarization, or classification.
2. Data Size and Quality: Larger and more diverse datasets may require more complex models.
3. Computational Resources: More sophisticated models require more computational power and memory.
4. Training Time: Complex models may take longer to train.
5. Performance Metrics: Consider the model's performance on benchmarks relevant to your task.
6. Scalability: The model should be able to scale with the size of the data and the complexity of the task.

In summary, Other Transformers are a diverse family of models that build upon the foundational concepts of the original Transformer to address a wide range of challenges in machine learning and artificial intelligence. The choice of a specific model depends on the requirements of the task, the available data, and the computational resources.
Please refer to the product datasheet for full specifications.