GAN.pdf
794.1 KB
Text-to-Image Generation with GANs
#GANs
@Machine_learn
Forwarded from Papers
Greetings,
Our first LLM paper is at the submission stage. A fourth author slot is available. To participate, contact me via my Telegram ID.


ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs


Abstract
Objective: This paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs). The aim is to improve the accuracy, completeness, and usefulness of knowledge graphs in cognitive neuroscience.

Method: We address the limitations of existing knowledge-graph construction tools, particularly in handling the complex hierarchical relationships found in the cognitive neuroscience literature. Using a large dataset of scientific papers and clinical reports, the ExKG-LLM framework applies state-of-the-art LLMs to extract, optimize, and integrate new entities and relationships into the CNKG, with performance evaluated on metrics such as precision, recall, and graph density.

Findings: The ExKG-LLM framework achieved significant improvements, including a precision of 0.80 (an increase of 6.67%), a recall of 0.81 (an increase of 15.71%), and an F1 score of 0.805 (an increase of 11.81%), while the numbers of edges and nodes grew by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, and engagement rates increased by 20%, highlighting areas where stability needs improvement. From a complex-network perspective, the diameter of the CNKG increased from 13 to 15, indicating that although ExKG-LLM has grown the graph, more steps are now required to reach additional nodes. Time complexity improved to O(n log n), but space complexity became less efficient, rising to O(n²), indicating higher memory usage for managing the expanded graph.
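As a quick sanity check (ours, not from the paper's code), the reported F1 score is consistent with the reported precision and recall:

precision, recall = 0.80, 0.81
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.805, matching the reported F1 score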
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb


Participation fee: 12 million.
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
🌟 RLtools

🟢TD3 - Pendulum, Racing Car, MuJoCo Ant-v4, Acrobot;
🟢PPO - Pendulum, Racing Car, MuJoCo Ant-v4 (CPU), MuJoCo Ant-v4 (CUDA);
🟢Multi-Agent PPO - Bottleneck;
🟢SAC - Pendulum (CPU), Pendulum (CUDA), Acrobot.





# Clone and checkout
git clone https://github.com/rl-tools/example
cd example
git submodule update --init external/rl_tools

# Build and run
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
./my_pendulum





🟡Arxiv
🟡RLTools Design Studio
🟡Demo
🟡Zoo Experiment Tracking
🟡Google Colab (Python Interface)
🖥GitHub


@Machine_learn
04. CNN Transfer Learning.pdf
2.1 MB
📚 Transfer Learning for CNNs: Leveraging Pre-trained Models


Transfer learning is a machine learning technique where a pre-trained model is used as a starting point for a new task. In the context of convolutional neural networks (CNNs), this means using a CNN that has been trained on a large dataset for one task (e.g., ImageNet) as a foundation for a new task (e.g., classifying medical images).


🌐 Why Transfer Learning?


1. Reduced Training Time: Training a CNN from scratch on a large dataset can be computationally expensive and time-consuming. Transfer learning allows you to leverage the knowledge learned by the pre-trained model, reducing training time significantly.
2. Improved Performance: Pre-trained models have often been trained on massive datasets, allowing them to learn general-purpose features that can be useful for a wide range of tasks. Using these pre-trained models can improve the performance of your new task.
3. Smaller Datasets: Transfer learning can be particularly useful when you have a small dataset for your new task. By using a pre-trained model, you can augment your limited data with the knowledge learned from the larger dataset.


💸 How Transfer Learning Works:


1. Choose a Pre-trained Model: Select a pre-trained CNN that is suitable for your task. Common choices include VGG16, ResNet, InceptionV3, and EfficientNet.
2. Freeze Layers: Typically, the earlier layers of a CNN learn general-purpose features, while the later layers learn more task-specific features. You can freeze the earlier layers of the pre-trained model to prevent them from being updated during training. This helps preserve the learned features.
3. Add New Layers: Add new layers, such as fully connected layers or convolutional layers, to the end of the pre-trained model. These layers will be trained on your new dataset to learn task-specific features.
4. Fine-tune: Train the new layers on your dataset while keeping the frozen layers fixed. This process is called fine-tuning (a minimal sketch follows this list).
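A minimal PyTorch sketch of this workflow (an illustration, not taken from a specific source), assuming a ResNet-50 backbone from torchvision and a hypothetical 10-class target task:

import torch
import torch.nn as nn
from torchvision import models

# 1. Choose a pre-trained model (ResNet-50 trained on ImageNet)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# 2. Freeze the earlier layers so their general-purpose features are preserved
for param in model.parameters():
    param.requires_grad = False

# 3. Add a new task-specific head (10 classes is an assumed example)
model.fc = nn.Linear(model.fc.in_features, 10)  # new layer is trainable by default

# 4. Fine-tune: only the new head's parameters are passed to the optimizer
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
# for images, labels in train_loader:   # train_loader is a hypothetical DataLoader
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()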


🔊 Common Transfer Learning Scenarios:


1. Feature Extraction: Extract features from the pre-trained model and use them as input to a different model, such as a support vector machine (SVM) or a random forest (see the sketch after this list).
2. Fine-tuning: Fine-tune the pre-trained model on your new dataset to adapt it to your specific task.
3. Hybrid Approach: Combine feature extraction and fine-tuning by extracting features from the pre-trained model and using them as input to a new model, while also fine-tuning some layers of the pre-trained model.
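A rough sketch of the feature-extraction scenario (item 1), assuming a frozen ResNet-50 backbone from torchvision and a scikit-learn SVM; the image batch below is a random placeholder standing in for real preprocessed data:

import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Frozen backbone with the classification head removed: it now outputs 2048-d feature vectors
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()
backbone.eval()

# Placeholder batch (32 images, 2 classes) standing in for a real dataset
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))

with torch.no_grad():
    features = backbone(images).numpy()

# Train a separate classical model (here an SVM) on the extracted features
clf = SVC(kernel="rbf").fit(features, labels.numpy())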


Transfer learning is a powerful technique that can significantly improve the performance and efficiency of CNNs, especially when working with limited datasets or time constraints.

🚀 Commonly Used Transfer Learning Methods:

1️⃣ VGG16: A simple yet effective CNN architecture with multiple convolutional layers followed by max-pooling layers. It excels at image classification tasks.

2️⃣ MobileNet: Designed for mobile and embedded vision applications, MobileNet uses depthwise separable convolutions to reduce the number of parameters and computational cost.

3️⃣ DenseNet: Connects each layer to all preceding layers within a dense block, promoting feature reuse and improving information flow. It often achieves high accuracy with fewer parameters.

4️⃣ Inception: Employs a combination of different sized convolutional filters in parallel, capturing features at multiple scales. It's known for its efficient use of computational resources.

5️⃣ ResNet: Introduces residual connections, enabling the network to learn more complex features by allowing information to bypass layers. It addresses the vanishing gradient problem.

6️⃣ EfficientNet: A family of models that systematically scale up network width, depth, and resolution using a compound scaling method. It achieves state-of-the-art accuracy with improved efficiency.

7️⃣ NASNet: Leverages neural architecture search to automatically design efficient CNN architectures. It often outperforms manually designed models in terms of accuracy and efficiency.
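For reference, most of the architectures listed above ship with ImageNet-pretrained weights in torchvision (Keras offers equivalents in tf.keras.applications); a brief loading sketch, assuming torchvision >= 0.13:

from torchvision import models

# Each call downloads the ImageNet-pretrained weights on first use
vgg       = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
mobilenet = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
densenet  = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
inception = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
resnet    = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
effnet    = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
# NASNet is not in torchvision; tf.keras.applications.NASNetMobile is one alternative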

@Machine_learn
Large Language Models Course: Learn by Doing LLM Projects

🖥 Github: https://github.com/peremartra/Large-Language-Model-Notebooks-Course

📕 Paper: https://doi.org/10.31219/osf.io/qgxea

@Machine_learn
Python for Everybody: Exploring Data Using Python 3

📓 book

@Machine_learn
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation

Paper: https://arxiv.org/pdf/2409.13731v3.pdf

Code: https://github.com/openspg/kag

Dataset: 2WikiMultiHopQA

🔸@Machine_learn
Arcade Academy - Learn Python

📖 Book

@Machine_learn
📄 RNA Sequencing Data: Hitchhiker's Guide to Expression Analysis


📎 Study the paper


@Machine_learn
Lecture notes: mathematics for artificial intelligence

📕 Link


@Machine_learn
Tonight is the last chance to join this paper...! 🔸🔸
🌟 OuteTTS-0.2-500M

# Install from PyPI
pip install outetts

# Interface Usage
import outetts

# Configure the model
model_config = outetts.HFModelConfig_v1(
    model_path="OuteAI/OuteTTS-0.2-500M",
    language="en",  # Supported languages in v0.2: en, zh, ja, ko
)

# Initialize the interface
interface = outetts.InterfaceHF(model_version="0.2", cfg=model_config)

# Optional: Create a speaker profile (use a 10-15 second audio clip)
speaker = interface.create_speaker(
    audio_path="path/to/audio/file",
    transcript="Transcription of the audio file."
)

# Optional: Load speaker from default presets
interface.print_default_speakers()
speaker = interface.load_default_speaker(name="male_1")

output = interface.generate(
    text="%Prompt Text%%.",
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096,

    # Optional: Use a speaker profile
    speaker=speaker,
)

# Save the synthesized speech to a file
output.save("output.wav")


🟡Demo

🖥GitHub

@Machine_learn
⚡️ NeuZip


# Install from PyPI
pip install neuzip

# Use NeuZip for a PyTorch model
import neuzip
import torch

model: torch.nn.Module = ...    # your model
manager = neuzip.Manager()      # create a NeuZip manager
model = manager.convert(model)  # use the converted model as a drop-in replacement



🟡Arxiv
🖥GitHub


@Machine_learn
Forwarded from Papers
Greetings. We have started the BioPars project and need a fifth author for this paper.
The work is supervised by Prof.
Rex (Zhitao) Ying
link: https://scholar.google.com.au/citations?user=6fqNXooAAAAJ&hl=en
BioPars: a pre-trained biomedical large language model for Persian biomedical text mining.
1. Initial stage: collecting Persian biomedical texts from sources (...)
2. Preprocessing and cleaning the texts
3. Training the selected transformers
4. Using the trained embeddings in three tasks (...)

The server cost is $1.2 per hour, and roughly 2,000 hours are needed to train the language model.

Anyone interested is welcome to join our team.
🔸🔸🔸🔸🔸

@Raminmousa