# Clone and checkout
git clone https://github.com/rl-tools/example
cd example
git submodule update --init external/rl_tools
# Build and run
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
./my_pendulum
@Machine_learn
Machine learning books and papers
Hello! Our first LLM paper is at the submission stage. A fourth author can still be added; to participate, contact me via my ID. ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs. Abstract. Objective:…
Hello! The last chance to participate in this paper is tomorrow night...!
04. CNN Transfer Learning.pdf
2.1 MB
📚 Transfer Learning for CNNs: Leveraging Pre-trained Models
Transfer learning is a machine learning technique where a pre-trained model is used as a starting point for a new task. In the context of convolutional neural networks (CNNs), this means using a CNN that has been trained on a large dataset for one task (e.g., ImageNet) as a foundation for a new task (e.g., classifying medical images).
🌐 Why Transfer Learning?
1. Reduced Training Time: Training a CNN from scratch on a large dataset can be computationally expensive and time-consuming. Transfer learning allows you to leverage the knowledge learned by the pre-trained model, reducing training time significantly.
2. Improved Performance: Pre-trained models have often been trained on massive datasets, allowing them to learn general-purpose features that can be useful for a wide range of tasks. Using these pre-trained models can improve the performance of your new task.
3. Smaller Datasets: Transfer learning can be particularly useful when you have a small dataset for your new task. By using a pre-trained model, you can augment your limited data with the knowledge learned from the larger dataset.
💸 How Transfer Learning Works:
1. Choose a Pre-trained Model: Select a pre-trained CNN that is suitable for your task. Common choices include VGG16, ResNet, InceptionV3, and EfficientNet.
2. Freeze Layers: Typically, the earlier layers of a CNN learn general-purpose features, while the later layers learn more task-specific features. You can freeze the earlier layers of the pre-trained model to prevent them from being updated during training. This helps to preserve the learned features.
3. Add New Layers: Add new layers, such as fully connected layers or convolutional layers, to the end of the pre-trained model. These layers will be trained on your new dataset to learn task-specific features.
4. Fine-tune: Train the new layers on your dataset while keeping the frozen layers fixed. This process is called fine-tuning; a minimal sketch follows this list.
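To make the four steps concrete, here is a minimal PyTorch sketch, assuming torchvision is available and a hypothetical 10-class target task (data loading and the training loop are omitted):

import torch
import torch.nn as nn
import torchvision.models as models

# 1. Choose a pre-trained model: ResNet-18 with ImageNet weights.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# 2. Freeze the pre-trained layers so they are not updated during training.
for param in model.parameters():
    param.requires_grad = False

# 3. Add new layers: replace the 1000-class ImageNet head with a fresh one.
model.fc = nn.Linear(model.fc.in_features, 10)  # trainable by default

# 4. Fine-tune: optimize only the new head; the frozen backbone stays fixed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)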
🔊 Common Transfer Learning Scenarios:
1. Feature Extraction: Extract features from the pre-trained model and use them as input to a different model, such as a support vector machine (SVM) or a random forest; see the sketch after this list.
2. Fine-tuning: Fine-tune the pre-trained model on your new dataset to adapt it to your specific task.
3. Hybrid Approach: Combine feature extraction and fine-tuning by extracting features from the pre-trained model and using them as input to a new model, while also fine-tuning some layers of the pre-trained model.
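A minimal sketch of scenario 1, assuming torchvision and scikit-learn are installed; the random images and labels below are stand-ins for a real preprocessed dataset:

import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC

# Pre-trained backbone with the classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()
backbone.eval()

# Stand-in data; replace with your own images (N, 3, 224, 224) and labels.
images = torch.randn(8, 3, 224, 224)
labels = [0, 1, 0, 1, 0, 1, 0, 1]

with torch.no_grad():
    features = backbone(images)  # (8, 512) feature vectors from ResNet-18

clf = SVC().fit(features.numpy(), labels)  # SVM trained on frozen features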
Transfer learning is a powerful technique that can significantly improve the performance and efficiency of CNNs, especially when working with limited datasets or time constraints.
🚀 Commonly Used Transfer Learning Methods:
1️⃣ VGG16: A simple yet effective CNN architecture with multiple convolutional layers followed by max-pooling layers. It excels at image classification tasks.
2️⃣ MobileNet: Designed for mobile and embedded vision applications, MobileNet uses depthwise separable convolutions (sketched after this list) to reduce the number of parameters and computational cost.
3️⃣ DenseNet: Connects each layer to every subsequent layer within a dense block, promoting feature reuse and improving information flow. It often achieves high accuracy with fewer parameters.
4️⃣ Inception: Employs a combination of different-sized convolutional filters in parallel, capturing features at multiple scales. It's known for its efficient use of computational resources.
5️⃣ ResNet: Introduces residual connections (sketched after this list), enabling the network to learn more complex features by allowing information to bypass layers. It addresses the vanishing-gradient problem.
6️⃣ EfficientNet: A family of models that systematically scales up network width, depth, and resolution using a compound scaling method. It achieves state-of-the-art accuracy with improved efficiency.
7️⃣ NASNet: Leverages neural architecture search to automatically design efficient CNN architectures. It often outperforms manually designed models in terms of accuracy and efficiency.
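Two of the ideas above lend themselves to short sketches. Below is a minimal PyTorch illustration, not the papers' exact blocks, of a MobileNet-style depthwise separable convolution and a ResNet-style residual block:

import torch
import torch.nn as nn

def depthwise_separable(c_in, c_out, k=3):
    # Depthwise conv filters each channel separately (groups=c_in), then a
    # 1x1 pointwise conv mixes channels: roughly c_in*k*k + c_in*c_out
    # weights instead of c_in*c_out*k*k for a standard convolution.
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in),
        nn.Conv2d(c_in, c_out, 1),
    )

class ResidualBlock(nn.Module):
    # The skip connection (x + ...) lets gradients bypass the conv layers,
    # which is what mitigates the vanishing-gradient problem.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

x = torch.randn(1, 64, 32, 32)
print(depthwise_separable(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
print(ResidualBlock(64)(x).shape)             # torch.Size([1, 64, 32, 32])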
@Machine_learn
Large Language Models Course: Learn by Doing LLM Projects
🖥 Github: https://github.com/peremartra/Large-Language-Model-Notebooks-Course
📕 Paper: https://doi.org/10.31219/osf.io/qgxea
@Machine_learn
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation
Paper: https://arxiv.org/pdf/2409.13731v3.pdf
Code: https://github.com/openspg/kag
Dataset: 2WikiMultiHopQA
🔸 @Machine_learn
CogAgent: A Visual Language Model for GUI Agents
Paper: https://arxiv.org/pdf/2312.08914v3.pdf
CVPR 2024: http://openaccess.thecvf.com//content/CVPR2024/papers/Hong_CogAgent_A_Visual_Language_Model_for_GUI_Agents_CVPR_2024_paper.pdf
Code1: https://github.com/thudm/cogvlm
Code2: https://github.com/digirl-agent/digirl
Code3: https://github.com/THUDM/CogAgent
Dataset: TextVQA
💠 @Machine_learn
Tonight is the last chance to participate in this paper...!
# Install from PyPI
pip install outetts

# Interface Usage
import outetts

# Configure the model
model_config = outetts.HFModelConfig_v1(
    model_path="OuteAI/OuteTTS-0.2-500M",
    language="en",  # Supported languages in v0.2: en, zh, ja, ko
)

# Initialize the interface
interface = outetts.InterfaceHF(model_version="0.2", cfg=model_config)

# Optional: Create a speaker profile (use a 10-15 second audio clip)
speaker = interface.create_speaker(
    audio_path="path/to/audio/file",
    transcript="Transcription of the audio file."
)

# Optional: Load speaker from default presets
interface.print_default_speakers()
speaker = interface.load_default_speaker(name="male_1")

# Generate speech from a prompt
output = interface.generate(
    text="Your prompt text here.",
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096,
    # Optional: Use a speaker profile
    speaker=speaker,
)

# Save the synthesized speech to a file
output.save("output.wav")
@Machine_learn
# Install from PyPI
pip install neuzip
# Use NeuZip with a PyTorch model
import neuzip
import torch

model: torch.nn.Module = ...    # your model here
manager = neuzip.Manager()      # NeuZip compression manager
model = manager.convert(model)  # convert the model to use compressed weights
@Machine_learn
Forwarded from Papers
Hello! We have started the BioPars project and need a fifth author for this paper.
This work is being carried out under the supervision of Professor Rex (Zhitao) Ying.
link: https://scholar.google.com.au/citations?user=6fqNXooAAAAJ&hl=en
BioPars: a pre-trained biomedical large language model for Persian biomedical text mining.
1. Initial phase: collecting Persian biomedical texts from sources (...)
2. Preprocessing and cleaning the texts
3. Training the chosen transformers
4. Using the trained embeddings in three tasks (...)
The server costs $1.20 per hour, and roughly 2,000 hours are needed to train the language model.
Anyone interested can join our team.
@Raminmousa
We have reduced the final cost for the fifth author slot to 25 million...!🔥