Probability, Random Processes, and Statistical Analysis: Applications to Communications, Signal Processing, Queueing Theory and Mathematical Finance
📚 Book
@Machine_learn
Friends, this work will yield three papers...!
New research papers and GitHub code
🟢 Motivo
🟡 Paper 🟡 Demo 🟡 GitHub
🟢 Video Seal
🟡 Paper 🟡 Demo 🟡 GitHub
🟢 Flow Matching
🟡 Paper 🟡 GitHub
🟢 Explore Theory-of-Mind
🟡 Paper 🟡 GitHub 🟡 Dataset
🟢 Large Concept Model (LCM)
🟡 Paper 🟡 GitHub
🟢 Dynamic Byte Latent Transformer
🟡 Paper 🟡 GitHub
🟢 Memory Layers
🟡 Paper 🟡 GitHub
🟢 EvalGym
🟡 Paper 🟡 GitHub
🟢 CLIP 1.2
🟡 Paper 🟡 GitHub 🟡 Dataset 🟡 Model
@Machine_learn
Meta
Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models | Research - AI at Meta
Unsupervised reinforcement learning (RL) aims at pre-training agents that can solve a wide range of downstream tasks in complex environments. Despite...
Forwarded from Papers
Greetings,
Our first LLM paper is at the submission stage. A fourth author slot can still be added; to participate, please contact me directly.
ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs
Abstract
Objective: This paper introduces ExKG-LLM, an innovative framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs), with the aim of improving the accuracy, completeness and usefulness of knowledge graphs in cognitive neuroscience.
Method: To address the limitations of existing knowledge-graph construction tools, in particular their difficulty with the complex hierarchical relationships found in the cognitive neuroscience literature, the ExKG-LLM framework applies state-of-the-art LLMs to a large dataset of scientific papers and clinical reports to extract, optimize and integrate new entities and relationships into the CNKG, evaluating performance with metrics such as precision, recall and graph density.
Findings: The ExKG-LLM framework achieved significant improvements, including precision of 0.80 (a 6.67% increase), recall of 0.81 (a 15.71% increase) and an F1 score of 0.805 (an 11.81% increase), while the numbers of edges and nodes grew by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, and engagement rates rose by 20%, highlighting areas where stability still needs improvement. From a complex-network perspective, the CNKG diameter increased from 13 to 15, so although the graph has grown, more steps are now needed to reach additional nodes. Time complexity improved to O(n log n), but space complexity worsened to O(n²), indicating higher memory usage for managing the expanded graph.
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
Telegram
Papers
In this channel we share the papers we are working on and plan to present new work from each area we are active in.
@Raminmousa
Forwarded from Papers
Greetings,
Our first LLM paper, "ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs" (abstract above), is at the submission stage. A fourth author slot can still be added; to participate, please contact me directly.
Participation fee: 12 million.
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
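The commands below clone the rl-tools example repository and build the pendulum demo with CMake; they assume git, CMake and a C++ toolchain are already installed.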
# Clone and checkout
git clone https://github.com/rl-tools/example
cd example
git submodule update --init external/rl_tools
# Build and run
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
./my_pendulum
@Machine_learn
Greetings,
Last chance to participate in this paper: until tomorrow night...!
04. CNN Transfer Learning.pdf
2.1 MB
📌 Transfer Learning for CNNs: Leveraging Pre-trained Models
Transfer learning is a machine learning technique where a pre-trained model is used as a starting point for a new task. In the context of convolutional neural networks (CNNs), this means using a CNN that has been trained on a large dataset for one task (e.g., ImageNet) as a foundation for a new task (e.g., classifying medical images).
📌 Why Transfer Learning?
1. Reduced Training Time: Training a CNN from scratch on a large dataset can be computationally expensive and time-consuming. Transfer learning allows you to leverage the knowledge learned by the pre-trained model, reducing training time significantly.
2. Improved Performance: Pre-trained models have often been trained on massive datasets, allowing them to learn general-purpose features that can be useful for a wide range of tasks. Using these pre-trained models can improve the performance of your new task.
3. Smaller Datasets: Transfer learning can be particularly useful when you have a small dataset for your new task. By using a pre-trained model, you can augment your limited data with the knowledge learned from the larger dataset.
🔸 How Transfer Learning Works:
1. Choose a Pre-trained Model: Select a pre-trained CNN that is suitable for your task. Common choices include VGG16, ResNet, InceptionV3, and EfficientNet.
2. Freeze Layers: Typically, the earlier layers of a CNN learn general-purpose features, while the later layers learn more task-specific features. You can freeze the earlier layers of the pre-trained model to prevent them from being updated during training. This helps to preserve the learned features.
3. Add New Layers: Add new layers, such as fully connected layers or convolutional layers, to the end of the pre-trained model. These layers will be trained on your new dataset to learn task-specific features.
4. Fine-tune: Train the new layers on your dataset while keeping the frozen layers fixed. This process is called fine-tuning (see the sketch below).
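The steps above map onto just a few lines of Keras/TensorFlow. The sketch below is a hypothetical illustration (not taken from the attached PDF): it freezes a MobileNetV2 base pre-trained on ImageNet and trains only a small new head; the 5-class output layer and the train_ds/val_ds datasets are placeholder assumptions.

from tensorflow import keras

# 1. Choose a pre-trained model (ImageNet weights, classification head removed).
base_model = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)

# 2. Freeze the pre-trained layers so their weights are not updated.
base_model.trainable = False

# 3. Add new task-specific layers on top of the frozen base.
inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base_model(x, training=False)                # keep BatchNorm layers in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)
outputs = keras.layers.Dense(5, activation="softmax")(x)   # assumed 5 target classes
model = keras.Model(inputs, outputs)

# 4. Fine-tune: train only the new head on your (possibly small) dataset.
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # train_ds / val_ds: your own data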
📌 Common Transfer Learning Scenarios:
1. Feature Extraction: Extract features from the pre-trained model and use them as input to a different model, such as a support vector machine (SVM) or a random forest.
2. Fine-tuning: Fine-tune the pre-trained model on your new dataset to adapt it to your specific task (sketched after this list).
3. Hybrid Approach: Combine feature extraction and fine-tuning by extracting features from the pre-trained model and using them as input to a new model, while also fine-tuning some layers of the pre-trained model.
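For the fine-tuning scenario (2), a common follow-up, continuing the hypothetical sketch above, is to unfreeze only the top of the base model and keep training with a much lower learning rate so the pre-trained features are not destroyed:

# Continues from the sketch above: unfreeze only the last layers of the base model.
base_model.trainable = True
for layer in base_model.layers[:-20]:            # keep all but roughly the last 20 layers frozen
    layer.trainable = False

model.compile(optimizer=keras.optimizers.Adam(1e-5),   # low learning rate for fine-tuning
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)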
Transfer learning is a powerful technique that can significantly improve the performance and efficiency of CNNs, especially when working with limited datasets or time constraints.
📌 Commonly Used Transfer Learning Methods:
1️⃣ VGG16: A simple yet effective CNN architecture with multiple convolutional layers followed by max-pooling layers. It excels at image classification tasks.
2️⃣ MobileNet: Designed for mobile and embedded vision applications, MobileNet uses depthwise separable convolutions to reduce the number of parameters and computational cost.
3️⃣ DenseNet: Connects each layer to every other layer, promoting feature reuse and improving information flow. It often achieves high accuracy with fewer parameters.
4️⃣ Inception: Employs a combination of different sized convolutional filters in parallel, capturing features at multiple scales. It's known for its efficient use of computational resources.
5️⃣ ResNet: Introduces residual connections, enabling the network to learn more complex features by allowing information to bypass layers. It addresses the vanishing gradient problem.
6️⃣ EfficientNet: A family of models that systematically scale up network width, depth, and resolution using a compound scaling method. It achieves state-of-the-art accuracy with improved efficiency.
7️⃣ NASNet: Leverages neural architecture search to automatically design efficient CNN architectures. It often outperforms manually designed models in terms of accuracy and efficiency.
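All of the architectures listed above are available as drop-in base models in keras.applications, so swapping backbones is usually a one-line change. A small hypothetical sketch that instantiates each one (weights=None avoids downloading pre-trained weights) and prints its parameter count:

from tensorflow import keras

# Any of these constructors can replace MobileNetV2 as the frozen base in the earlier sketch.
candidates = {
    "VGG16": keras.applications.VGG16,
    "MobileNetV2": keras.applications.MobileNetV2,
    "DenseNet121": keras.applications.DenseNet121,
    "InceptionV3": keras.applications.InceptionV3,
    "ResNet50": keras.applications.ResNet50,
    "EfficientNetB0": keras.applications.EfficientNetB0,
    "NASNetMobile": keras.applications.NASNetMobile,
}
for name, constructor in candidates.items():
    model = constructor(input_shape=(224, 224, 3), include_top=False, weights=None)
    print(f"{name}: {model.count_params():,} parameters")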
@Machine_learn
Large Language Models Course: Learn by Doing LLM Projects
🖥 GitHub: https://github.com/peremartra/Large-Language-Model-Notebooks-Course
📄 Paper: https://doi.org/10.31219/osf.io/qgxea
@Machine_learn