
Specialty Transformers

Results: 13 items

Part Number | Manufacturer | Description | Stock | Ship Date | Price Breaks | MOQ | Mult | SPQ
MGPWT-00059-P | TE Connectivity | — | 1875 | 7-9 working days | 15+ $3.4815; 75+ $3.4206; 150+ $3.3597; 300+ $3.2988 | 15 | 1 | 1
1879391-1 | TE Connectivity | — | 60000 | 37 weeks | 600+ $3.875; 1200+ $3.675; 12000+ $3.5375; 24000+ $3.3375; 60000+ $3.075 | 600 | 600 | 600
C8231-P | TE Connectivity | — | 430000 | 25 weeks | 4300+ $2.4; 43000+ $2.3; 86000+ $2.175; 215000+ $2.0125 | 4300 | 4300 | 4300
1879402-1 | TE Connectivity | — | 430000 | 25 weeks | 4300+ $2.4; 43000+ $2.3; 86000+ $2.175; 215000+ $2.0125 | 4300 | 4300 | 4300
3-1879385-5 | TE Connectivity | EP13 transformer, DIP through-hole mounting, 42 mm (length) × 13.97 mm (height) | 3600 | 7-9 working days | 55+ $2.0097; 200+ $1.9082; 650+ $1.827; 1600+ $1.7458 | 55 | 1 | 1
02561022-000 | TE Connectivity | HCT series LVDT | 500 | 16-20 weeks | 5+ $955.8303; 10+ $810.6902; 25+ $727.5678 | 5 | 5 | 5
C9423-P | TE Connectivity | 1.222:1/0.044:1, 1 W, SMD mount, 26.16 mm (length) × 32.51 mm (height) | 2025 | 7-9 working days | 20+ $5.2273; 65+ $4.9837; 250+ $4.7502; 650+ $4.5472; 1900+ $4.3544 | 20 | 1 | 1
MGPWT-00453-P | TE Connectivity | — | 50000 | 27 weeks | 500+ $3.775; 1000+ $3.575; 10000+ $3.45; 20000+ $3.25; 50000+ $3 | 500 | 500 | 500
1879395-1 | TE Connectivity | 1.222:1/0.044:1, SMD mount, 26.16 mm (length) × 32.51 mm (height) | 2025 | 7-9 working days | 20+ $5.2273; 65+ $4.9837; 250+ $4.7502; 650+ $4.5472; 1900+ $4.3544 | 20 | 1 | 1
MGSP5-00001-P | TE Connectivity | — | 120000 | 27 weeks | 1200+ $6.8375; 12000+ $6.5875; 24000+ $6.225; 60000+ $5.7375 | 1200 | 1200 | 1200
1879562-2 | TE Connectivity | — | 120000 | 27 weeks | 1200+ $6.975; 12000+ $6.7125; 24000+ $6.35; 60000+ $5.8625 | 1200 | 1200 | 1200
4100-77K18BB777 | TE Connectivity | — | 100 | 7-9 working days | 30+ $3.4104 | 30 | 1 | 1
2-1879391-6 | TE Connectivity | — | 0 | 6-14 working days | 350+ $2.7664 | 350 | 350 | 350
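The tiered unit prices in the listing follow a standard quantity-break scheme: an order must meet the MOQ (and be a multiple of Mult), and the unit price comes from the highest break quantity the order reaches. A minimal sketch, using MGPWT-00059-P's tiers from the listing; the helper function is illustrative, not a distributor API:

```python
# Price breaks for MGPWT-00059-P, taken from the listing above:
# (minimum quantity, unit price in USD)
breaks = [(15, 3.4815), (75, 3.4206), (150, 3.3597), (300, 3.2988)]

def extended_price(qty, breaks):
    """Return qty * unit price, using the last break tier whose minimum
    quantity the order meets. Raises if qty is below the MOQ."""
    price = None
    for min_qty, unit in breaks:
        if qty >= min_qty:
            price = unit  # keep walking up to the highest tier reached
    if price is None:
        raise ValueError("quantity below MOQ")
    return round(qty * price, 2)

print(extended_price(15, breaks))   # 52.22  (15 x $3.4815)
print(extended_price(300, breaks))  # 989.64 (300 x $3.2988)
```

At the MOQ of 15 this reproduces the $52.22 extended price the distributor displays for that part.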

Specialty Transformers

"Other Transformers" refers to a class of neural network architectures that extend the capabilities of the original Transformer model, introduced in the paper "Attention Is All You Need" by Vaswani et al. in 2017. The original Transformer revolutionized natural language processing (NLP) by using self-attention mechanisms to process sequences of data, such as text or time series.

Definition:
Other Transformers are variations or extensions of the basic Transformer architecture, designed to address specific challenges or to improve performance in various tasks. They often incorporate additional layers, attention mechanisms, or training techniques to enhance the model's capabilities.

Functions:
1. Enhanced Attention Mechanisms: Some variants refine the attention mechanism itself; multi-head attention, already present in the original model, lets the model attend to different parts of the input sequence simultaneously.
2. Positional Encoding: To preserve the order of sequence data, positional encodings are added to the input embeddings.
3. Layer Normalization: This technique is used to stabilize the training of deep networks by normalizing the inputs to each layer.
4. Feedforward Networks: Each Transformer layer includes a feedforward neural network that processes the attention outputs.
5. Residual Connections: These connections help in training deeper networks by adding the output of a layer to its input before passing it to the next layer.
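The five components above compose into a single encoder layer: self-attention and a feedforward network, each wrapped in a residual connection and layer normalization, applied to position-encoded inputs. A minimal single-head NumPy sketch for illustration; real Transformers use learned query/key/value projections and multiple heads:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings (component 2 above)."""
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even feature columns
    pe[:, 1::2] = np.cos(angles)                       # odd feature columns
    return pe

def layer_norm(x, eps=1e-5):
    """Normalize each position's feature vector (component 3 above)."""
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def attention(q, k, v):
    """Scaled dot-product attention, single head (component 1 above)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ v

def encoder_layer(x, w_ff1, w_ff2):
    """Attention and feedforward sublayers (component 4), each wrapped in
    a residual connection plus layer norm (component 5)."""
    x = layer_norm(x + attention(x, x, x))             # self-attention block
    ff = np.maximum(0, x @ w_ff1) @ w_ff2              # ReLU feedforward
    return layer_norm(x + ff)

rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 4, 8, 16
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
out = encoder_layer(x, rng.normal(size=(d_model, d_ff)),
                    rng.normal(size=(d_ff, d_model)))
print(out.shape)  # (4, 8): output shape matches input, so layers stack
```

Because the output shape matches the input, layers like this can be stacked to build deeper models; the residual connections keep gradients flowing through the stack.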

Applications:
- Natural Language Understanding (NLU): For tasks like sentiment analysis, question answering, and text classification.
- Machine Translation: To translate text from one language to another.
- Speech Recognition: Transcribing spoken language into written text.
- Time Series Analysis: For forecasting and pattern recognition in sequential data.
- Image Recognition: Some Transformers have been adapted for computer vision tasks.

Selection Criteria:
When choosing an Other Transformer model, consider the following:
1. Task Specificity: The model should be suitable for the specific task at hand, whether it's translation, summarization, or classification.
2. Data Size and Quality: Larger and more diverse datasets may require more complex models.
3. Computational Resources: More sophisticated models require more computational power and memory.
4. Training Time: Complex models may take longer to train.
5. Performance Metrics: Consider the model's performance on benchmarks relevant to your task.
6. Scalability: The model should be able to scale with the size of the data and the complexity of the task.

In summary, Other Transformers are a diverse family of models that build upon the foundational concepts of the original Transformer to address a wide range of challenges in machine learning and artificial intelligence. The choice of a specific model depends on the requirements of the task, the available data, and the computational resources.
Please refer to the product rule book for details.