
Specialty Transformers

Specialty Transformers Results: 44
Each listing shows: Part Number, Manufacturer, Description, Availability (stock quantity and ship date), Unit Price breaks, and MOQ / Mult / SPQ order terms.
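
The short Python sketch below is a hypothetical illustration (not a distributor API) of how the tiered unit-price breaks and the MOQ / Mult order terms in these listings combine into a line total; the figures are copied from the CTX2-4A-R and CTX15-4A-R entries purely as worked examples.

def unit_price(qty, breaks):
    """Return the price of the largest price break whose minimum quantity is met."""
    eligible = [(min_qty, price) for min_qty, price in breaks if qty >= min_qty]
    if not eligible:
        raise ValueError("Quantity is below the smallest price break / MOQ.")
    return max(eligible)[1]

def line_total(qty, breaks, moq, mult):
    """Validate qty against the MOQ and order multiple, then return qty * unit price."""
    if qty < moq:
        raise ValueError(f"Quantity must be at least the MOQ ({moq}).")
    if qty % mult != 0:
        raise ValueError(f"Quantity must be a multiple of {mult}.")
    return round(qty * unit_price(qty, breaks), 2)

# CTX2-4A-R: one break, 600+ at $2.9432, MOQ 600, Mult 600 -> 600 * 2.9432 = 1765.92
print(line_total(600, [(600, 2.9432)], moq=600, mult=600))

# CTX15-4A-R: several breaks, MOQ 1, Mult 1; an order of 250 falls in the 250+ tier
ctx15 = [(1, 5.773), (10, 5.016), (25, 4.6764), (50, 4.4172),
         (100, 4.2876), (250, 4.0392), (600, 3.8745), (1200, 3.6435)]
print(line_total(250, ctx15, moq=1, mult=1))  # 1009.8
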
CTX2-4A-R
EATON
2.18μH,8.7μH SMD,11.4X11.4MM SMD mount 13.97mm*13.97mm*6.35mm
Quantity: 600
Ship Date: 7-12 working days
600+ $2.9432
MOQ: 600
Mult: 600
SPQ: 1
CTX15-4A-R
EATON
14.7μH,58.81μH SMD,11.4X11.4MM SMD mount 13.97mm*13.97mm*6.35mm
Quantity: 415
Ship Date: 6-13 working days
1+ $5.773
10+ $5.016
25+ $4.6764
50+ $4.4172
100+ $4.2876
250+ $4.0392
600+ $3.8745
1200+ $3.6435
MOQ: 1
Mult: 1
SPQ: 1
CTX01-15030
EATON
Pulse Transformers XFMR,CONT FLY,25uH 3.10AP
Quantity: 1277
Ship Date: 7-9 working days
11+ $10.5755
49+ $9.8532
97+ $9.3685
484+ $8.9995
MOQ: 11
Mult: 1
SPQ: 1
CTX68-1P-R
EATON
273.61μH,68.4μH SMD,11.4X11.4MM SMD mount 11.43mm*11.43mm*4.19mm
Quantity: 2273
Ship Date: 6-13 working days
1+ $3.266
1100+ $2.9715
2200+ $2.856
MOQ: 1
Mult: 1
SPQ: 1
DRQ125-471-R
EATON
473.1μH,1.892mH SMD mount 12.5mm*12.5mm*6mm
Quantity: 17
Ship Date: 6-13 working days
1+ $1.5065
10+ $1.242
100+ $0.9418
250+ $0.9364
600+ $0.6772
1200+ $0.6167
2400+ $0.5807
MOQ: 1
Mult: 1
SPQ: 1
DRQ127-6R8-R
EATON
7.387μH,29.55μH 0127 SMD mount 12.5mm*12.5mm*8mm
Quantity: 2450
Ship Date: 10-15 working days
350+ $0.3629
700+ $0.3596
1050+ $0.3564
MOQ: 350
Mult: 350
SPQ: 350
CTX68-3-R
EATON
67.42μH SMD,14X14MM SMD mount 13.97mm*13.97mm*4.83mm
Quantity: 678
Ship Date: 6-13 working days
1+ $8.533
10+ $6.622
25+ $5.778
50+ $5.454
100+ $5.0544
250+ $4.9788
500+ $4.011
800+ $3.9795
MOQ: 1
Mult: 1
SPQ: 1
CTX150-2P-R
EATON
148.1μH,592.42μH SMD mount 11.43mm*11.43mm*5.97mm
Quantity: 800
Ship Date: 7-9 working days
800+ $2.4815
MOQ: 800
Mult: 800
SPQ: 800
CTX50-4-R
EATON
50.18μH,200.7μH SMD,11.4X11.4MM SMD mount 13.97mm*13.97mm*6.35mm
Quantity: 600
Ship Date: 6-14 working days
600+ $4.7702
MOQ: 600
Mult: 600
SPQ: 1
VP2-0066-R
EATON
3.2μH 1:1 12-SMD SMD mount 16.3mm*16.8mm*7.8mm
Quantity: 600
Ship Date: 7-9 working days
300+ $4.6008
MOQ: 300
Mult: 300
SPQ: 300
DRQ73-470-R
EATON
48.62μH,194.5μH SMD mount 7.6mm*7.6mm*3.55mm
Quantity: 1350
Ship Date: 6-14 working days
1350+ $0.4953
2700+ $0.4481
4050+ $0.4218
6750+ $0.4138
10800+ $0.4058
MOQ: 1350
Mult: 1350
SPQ: 1
CTX50-4A-R
EATON
50.11μH,200.4μH SMD,14X14MM SMD mount 13.97mm*13.97mm*6.35mm
Quantity: 1100
Ship Date: 7-9 working days
600+ $2.9084
MOQ: 600
Mult: 600
SPQ: 600
DRQ127-4R7-R
EATON
4.841μH,19.36μH SMD mount 12.5mm*12.5mm*8mm
Quantity: 204
Ship Date: 3-12 working days
1+ $1.4272
10+ $1.2054
50+ $1.1572
100+ $1.08
200+ $1.08
350+ $1.08
MOQ: 8
Mult: 1
DRQ74-100-R
EATON
9.882μH,39.53μH SMD mount 7.6mm*7.6mm*4.45mm
Quantity: 910
Ship Date: 7-9 working days
122+ $0.9266
379+ $0.8694
MOQ: 122
Mult: 1
SPQ: 1
SDQ25-471-R
EATON
472.4μH,1.8896mH 2020 SMD mount 5.2mm*5.2mm*2.5mm
Quantity: 0
Ship Date: 7-12 working days
2900+ $2.223
MOQ: 2900
Mult: 2900
SPQ: 1
CTX2-1-R
EATON
2.03μH,8.1μH SMD,11.4X11.4MM SMD mount 11.43mm*11.43mm*4.19mm
Quantity: 0
Ship Date: 6-13 working days
1100+ $3.738
MOQ: 1100
Mult: 1
SPQ: 1
CTX1-3-R
EATON
860nH SMD,14X14MM SMD mount 13.97mm*13.97mm*4.83mm
Quantity: 0
Ship Date: 6-13 working days
800+ $4.116
MOQ: 800
Mult: 1
SPQ: 1
CTX8-2-R
EATON
7.94μH SMD mount 11.43mm*11.43mm*5.97mm
Quantity: 0
Ship Date: 7-12 working days
800+ $3.6722
MOQ: 800
Mult: 800
SPQ: 1
SDQ12-2R2-R
EATON
2.25μH,9μH 2020 SMD mount 5.2mm*5.2mm*1.19mm
Quantity: 0
Ship Date: 7-12 working days
3800+ $1.872
MOQ: 3800
Mult: 3800
SPQ: 1
SDQ25-680-R
EATON
69.19μH,276.8μH 2020 SMD mount 5.2mm*5.2mm*2.5mm
Quantity: 0
Ship Date: 7-12 working days
2900+ $2.366
MOQ: 2900
Mult: 2900
SPQ: 1
CTX1-4-R
EATON
1.23μH SMD,11.4X11.4MM SMD mount 13.97mm*13.97mm*6.35mm
Quantity: 0
Ship Date: 6-13 working days
600+ $4.41
MOQ: 600
Mult: 1
SPQ: 1
SDQ12-3R3-R
EATON
3.61μH,14.44μH 2020 SMD mount 5.2mm*5.2mm*1.19mm
Quantity: 0
Ship Date: 7-12 working days
3800+ $1.872
MOQ: 3800
Mult: 3800
SPQ: 1
CTX5-3-R
EATON
4.7μH SMD,14X14MM SMD mount 13.97mm*13.97mm*4.83mm
Quantity: 0
Ship Date: 6-13 working days
800+ $4.116
MOQ: 800
Mult: 1
SPQ: 1
CTX15-1-R
EATON
14.4μH,57.6μH SMD,11.4X11.4MM SMD mount 11.43mm*11.43mm*4.19mm
Quantity: 0
Ship Date: 6-13 working days
1100+ $3.969
MOQ: 1100
Mult: 1
SPQ: 1
CTX68-2-R
EATON
67.87μH SMD,11.4X11.4MM SMD mount 11.43mm*11.43mm*5.97mm
Quantity: 0
Ship Date: 7-12 working days
800+ $3.8548
MOQ: 800
Mult: 800
SPQ: 1

Specialty Transformers

Other Transformers refers to a class of neural network architectures that extend the capabilities of the original Transformer model, which was introduced in the paper "Attention Is All You Need" by Vaswani et al. in 2017. The original Transformer model revolutionized the field of natural language processing (NLP) with its use of self-attention mechanisms to process sequences of data, such as text or time series.

Definition:
Other Transformers are variations or extensions of the basic Transformer architecture, designed to address specific challenges or to improve performance in various tasks. They often incorporate additional layers, attention mechanisms, or training techniques to enhance the model's capabilities.

Functions:
1. Enhanced Attention Mechanisms: The original model already uses multi-head self-attention, which lets the model focus on different parts of the input sequence simultaneously; variants typically refine or approximate this mechanism rather than replace it (see the encoder-layer sketch after this list).
2. Positional Encoding: Because self-attention is order-invariant on its own, positional encodings are added to the input embeddings to preserve the order of the sequence.
3. Layer Normalization: This technique is used to stabilize the training of deep networks by normalizing the inputs to each layer.
4. Feedforward Networks: Each Transformer layer includes a position-wise feedforward network, applied independently at each position, that further processes the attention outputs.
5. Residual Connections: These connections help in training deeper networks by adding the output of a layer to its input before passing it to the next layer.
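
The pieces above fit together in a single encoder layer. The following is a minimal sketch, assuming PyTorch, with illustrative hyperparameters (d_model = 64, 4 heads, feedforward width 256); it is a teaching example, not the implementation of any particular published model.

import math
import torch
import torch.nn as nn

def sinusoidal_positions(seq_len, d_model):
    """Fixed sin/cos positional encodings, added so the model can use token order."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

class EncoderLayer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Residual connection around multi-head self-attention, then layer norm.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Residual connection around the position-wise feedforward network.
        x = self.norm2(x + self.ff(x))
        return x

# Usage: a batch of 2 sequences of 10 tokens, with positional encoding added.
x = torch.randn(2, 10, 64) + sinusoidal_positions(10, 64)
print(EncoderLayer()(x).shape)  # torch.Size([2, 10, 64])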

Applications:
- Natural Language Understanding (NLU): For tasks like sentiment analysis, question answering, and text classification.
- Machine Translation: To translate text from one language to another.
- Speech Recognition: Transcribing spoken language into written text.
- Time Series Analysis: For forecasting and pattern recognition in sequential data.
- Image Recognition: Some Transformers have been adapted for computer vision tasks.

Selection Criteria:
When choosing an Other Transformer model, consider the following:
1. Task Specificity: The model should be suitable for the specific task at hand, whether it's translation, summarization, or classification.
2. Data Size and Quality: Larger and more diverse datasets may require more complex models.
3. Computational Resources: More sophisticated models require more computational power and memory (a rough sizing sketch follows this list).
4. Training Time: Complex models may take longer to train.
5. Performance Metrics: Consider the model's performance on benchmarks relevant to your task.
6. Scalability: The model should be able to scale with the size of the data and the complexity of the task.
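
To make the computational-resources criterion concrete, the back-of-the-envelope Python sketch below estimates encoder parameter counts from a few hyperparameters. The formula counts only the standard blocks (attention projections, feedforward layers, layer norms, token embeddings), and the BERT-base-like configuration is an illustrative assumption, so treat the output as an order-of-magnitude guide rather than an exact figure.

def encoder_params(n_layers, d_model, d_ff, vocab):
    attn = 4 * (d_model * d_model + d_model)                 # Q, K, V and output projections
    ffn = d_model * d_ff + d_ff + d_ff * d_model + d_model   # two linear layers with biases
    norms = 2 * 2 * d_model                                  # two layer norms, scale + bias each
    return n_layers * (attn + ffn + norms) + vocab * d_model # plus token embeddings

# A BERT-base-like configuration: 12 layers, d_model 768, d_ff 3072, ~30k vocabulary.
print(f"{encoder_params(12, 768, 3072, 30522):,}")  # roughly 108 million parameters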

In summary, Other Transformers are a diverse family of models that build upon the foundational concepts of the original Transformer to address a wide range of challenges in machine learning and artificial intelligence. The choice of a specific model depends on the requirements of the task, the available data, and the computational resources.
Please refer to the product rule book for details.