New research papers and GitHub code
🟢 Motivo
🟡 Paper 🟡 Demo 🟡 Github
🟢 Video Seal
🟡 Paper 🟡 Demo 🟡 Github
🟢 Flow Matching
🟡 Paper 🟡 Github
🟢 Explore Theory-of-Mind
🟡 Paper 🟡 Github 🟡 Dataset
🟢 Large Concept Model (LCM)
🟡 Paper 🟡 Github
🟢 Dynamic Byte Latent Transformer
🟡 Paper 🟡 Github
🟢 Memory Layers
🟡 Paper 🟡 Github
🟢 EvalGym
🟡 Paper 🟡 Github
🟢 CLIP 1.2
🟡 Paper 🟡 Github 🟡 Dataset 🟡 Model
@Machine_learn
Meta
Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models | Research - AI at Meta
Unsupervised reinforcement learning (RL) aims at pre-training agents that can solve a wide range of downstream tasks in complex environments. Despite...
Forwarded from Papers
Hello. Our first LLM paper is at the submission stage, and a fourth author can still be added. To participate, please message me directly.
ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs
Abstract
Objective: This paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs), with the aim of improving the accuracy, completeness, and usefulness of knowledge graphs in cognitive neuroscience.
Method: Existing knowledge-graph construction tools struggle with the complex hierarchical relationships found in the cognitive neuroscience literature. Using a large dataset of scientific papers and clinical reports, ExKG-LLM applies state-of-the-art LLMs to extract, optimize, and integrate new entities and relationships into the CNKG, and performance is evaluated with metrics such as precision, recall, and graph density.
Findings: ExKG-LLM achieved notable improvements: precision of 0.80 (up 6.67%), recall of 0.81 (up 15.71%), and an F1 score of 0.805 (up 11.81%), while the numbers of edges and nodes grew by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, and engagement rates increased by 20%, highlighting areas where stability still needs improvement. From a complex-network perspective, the CNKG diameter grew from 13 to 15, so although the expanded graph is larger, more steps are now needed to reach additional nodes. Time complexity improved to O(n log n), but space complexity worsened to O(n²), indicating higher memory usage for managing the expanded graph.
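(Quick consistency check, our arithmetic rather than the paper's: with P = 0.80 and R = 0.81, F1 = 2PR / (P + R) = 2 × 0.80 × 0.81 / 1.61 ≈ 0.805, matching the reported F1 score.)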
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
Telegram
Papers
In this channel we share the papers we are working on; the aim is to support one another and put out new work.
@Raminmousa
Forwarded from Papers
Hello. Our first LLM paper is at the submission stage; the fourth author slot is still open. To participate, please message me directly.
ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
Participation fee: 12 million.
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
RLtools: clone, build, and run the pendulum training example from the linked repository.
# Clone and checkout
git clone https://github.com/rl-tools/example
cd example
git submodule update --init external/rl_tools
# Build and run
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release   # configure an optimized release build
cmake --build .                       # compile the example
./my_pendulum                         # run the pendulum training executable
@Machine_learn
Hello. The last chance to participate in this paper (ExKG-LLM) is tomorrow night!
04. CNN Transfer Learning.pdf
2.1 MB
📚 Transfer Learning for CNNs: Leveraging Pre-trained Models
Transfer learning is a machine learning technique where a pre-trained model is used as a starting point for a new task. In the context of convolutional neural networks (CNNs), this means using a CNN that has been trained on a large dataset for one task (e.g., ImageNet) as a foundation for a new task (e.g., classifying medical images).
🌐 Why Transfer Learning?
1. Reduced Training Time: Training a CNN from scratch on a large dataset can be computationally expensive and time-consuming. Transfer learning allows you to leverage the knowledge learned by the pre-trained model, reducing training time significantly.
2. Improved Performance: Pre-trained models have often been trained on massive datasets, allowing them to learn general-purpose features that can be useful for a wide range of tasks. Using these pre-trained models can improve the performance of your new task.
3. Smaller Datasets: Transfer learning can be particularly useful when you have a small dataset for your new task. By using a pre-trained model, you can augment your limited data with the knowledge learned from the larger dataset.
💸 How Transfer Learning Works:
1. Choose a Pre-trained Model: Select a pre-trained CNN that is suitable for your task. Common choices include VGG16, ResNet, InceptionV3, and EfficientNet.
2. Freeze Layers: Typically, the earlier layers of a CNN learn general-purpose features, while the later layers learn more task-specific features. You can freeze the earlier layers of the pre-trained model to prevent them from being updated during training. This helps to preserve the learned features.
3. Add New Layers: Add new layers, such as fully connected layers or convolutional layers, to the end of the pre-trained model. These layers will be trained on your new dataset to learn task-specific features.
4. Fine-tune: Train the new layers on your dataset while keeping the frozen layers fixed. This process is called fine-tuning (a minimal code sketch follows this list).
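A minimal sketch of steps 1–4 above, assuming a TensorFlow/Keras 2.x setup; num_classes and train_ds are hypothetical placeholders for your own label count and tf.data dataset:

import tensorflow as tf

num_classes = 10   # hypothetical number of classes in the new task

# Step 1: choose a pre-trained model (ImageNet weights, classifier head removed).
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))

# Step 2: freeze the pre-trained layers so their weights stay fixed.
base.trainable = False

# Step 3: add new task-specific layers on top of the frozen base.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# Step 4: fine-tune -- only the new layers are updated during training.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)   # train_ds: your own tf.data.Dataset of (image, label) pairs

Once the new head has converged, a few of the top frozen layers can optionally be unfrozen and trained with a small learning rate for a further boost.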
🔊 Common Transfer Learning Scenarios:
1. Feature Extraction: Extract features from the pre-trained model and use them as input to a different model, such as a support vector machine (SVM) or a random forest (see the sketch after this list).
2. Fine-tuning: Fine-tune the pre-trained model on your new dataset to adapt it to your specific task.
3. Hybrid Approach: Combine feature extraction and fine-tuning by extracting features from the pre-trained model and using them as input to a new model, while also fine-tuning some layers of the pre-trained model.
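A sketch of scenario 1 (feature extraction), assuming TensorFlow/Keras and scikit-learn are available; X_images and y are placeholder arrays standing in for a real dataset:

import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Frozen CNN used purely as a feature extractor (global-average-pooled features).
extractor = tf.keras.applications.ResNet50(weights="imagenet",
                                           include_top=False, pooling="avg")

X_images = np.random.rand(32, 224, 224, 3).astype("float32")   # placeholder images
y = np.random.randint(0, 2, size=32)                           # placeholder labels

# Extract 2048-dimensional features and train a separate SVM classifier on them.
features = extractor.predict(
    tf.keras.applications.resnet50.preprocess_input(X_images), verbose=0)
clf = SVC(kernel="rbf").fit(features, y)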
Transfer learning is a powerful technique that can significantly improve the performance and efficiency of CNNs, especially when working with limited datasets or time constraints.
🚀 Commonly Used Transfer Learning Methods (a short loading sketch follows this list):
1️⃣ VGG16: A simple yet effective CNN architecture with multiple convolutional layers followed by max-pooling layers. It excels at image classification tasks.
2️⃣ MobileNet: Designed for mobile and embedded vision applications, MobileNet uses depthwise separable convolutions to reduce the number of parameters and computational cost.
3️⃣ DenseNet: Connects each layer to every other layer, promoting feature reuse and improving information flow. It often achieves high accuracy with fewer parameters.
4️⃣ Inception: Employs a combination of different sized convolutional filters in parallel, capturing features at multiple scales. It's known for its efficient use of computational resources.
5️⃣ ResNet: Introduces residual connections, enabling the network to learn more complex features by allowing information to bypass layers. It addresses the vanishing gradient problem.
6️⃣ EfficientNet: A family of models that systematically scale up network width, depth, and resolution using a compound scaling method. It achieves state-of-the-art accuracy with improved efficiency.
7️⃣ NASNet: Leverages neural architecture search to automatically design efficient CNN architectures. It often outperforms manually designed models in terms of accuracy and efficiency.
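All of the architectures above are available as ImageNet-pretrained backbones in tf.keras.applications, so swapping one for another in the fine-tuning sketch earlier is essentially a one-line change. A small illustrative sketch (weights are downloaded on first use):

import tensorflow as tf

# Constructors for ImageNet-pretrained backbones shipped with tf.keras.applications.
BACKBONES = {
    "VGG16": tf.keras.applications.VGG16,
    "MobileNet": tf.keras.applications.MobileNet,
    "DenseNet121": tf.keras.applications.DenseNet121,
    "InceptionV3": tf.keras.applications.InceptionV3,
    "ResNet50": tf.keras.applications.ResNet50,
    "EfficientNetB0": tf.keras.applications.EfficientNetB0,
    "NASNetMobile": tf.keras.applications.NASNetMobile,
}

# Pick a backbone by name; the rest of the transfer-learning pipeline stays unchanged.
base = BACKBONES["ResNet50"](weights="imagenet", include_top=False,
                             input_shape=(224, 224, 3))
print(f"{base.name}: {base.count_params():,} parameters")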
@Machine_learn
Large Language Models Course: Learn by Doing LLM Projects
🖥 Github: https://github.com/peremartra/Large-Language-Model-Notebooks-Course
📕 Paper: https://doi.org/10.31219/osf.io/qgxea
@Machine_learn
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation
Paper: https://arxiv.org/pdf/2409.13731v3.pdf
Code: https://github.com/openspg/kag
Dataset: 2WikiMultiHopQA
🔸 @Machine_learn
CogAgent: A Visual Language Model for GUI Agents
Paper: https://arxiv.org/pdf/2312.08914v3.pdf
CVPR 2024: http://openaccess.thecvf.com//content/CVPR2024/papers/Hong_CogAgent_A_Visual_Language_Model_for_GUI_Agents_CVPR_2024_paper.pdf
Code1: https://github.com/thudm/cogvlm
Code2: https://github.com/digirl-agent/digirl
Code3: https://github.com/THUDM/CogAgent
Dataset: TextVQA
💠 @Machine_learn
Tonight is the last chance to participate in this paper!