Introduction to Data Science – Lecture Material

🔗 Github

@Machine_learn
Only the fourth author slot on this joint project is still open.
Work starts on the 1st of Dey. To collaborate, message me at my ID below.
@Raminmousa
Practitioner Guide for Creating Effective Prompts in Large Language Models

🔗 Paper

@Machine_learn
🌟 SmolLM2

SmolLM2-1.7B 🟢 SmolLM2-1.7B-Instruct 🟢 Instruct GGUF

SmolLM2-360M 🟠 SmolLM2-360M-Instruct 🟠 Instruct GGUF

SmolLM2-135M 🟠 SmolLM2-135M-Instruct 🟠 Instruct GGUF (from the community)


▶️ SmolLM2-1.7B:

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda"  # for GPU usage, or "cpu" for CPU usage

# Load the tokenizer and model, and move the model to the target device.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Tokenize a prompt, generate a completion, and decode it back to text.
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
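For the Instruct variants, prompts normally go through the model's chat template before generation. A minimal sketch, assuming the HuggingFaceTB/SmolLM2-1.7B-Instruct checkpoint listed above and the standard transformers chat-template API (the sampling parameters are illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumed Instruct checkpoint
device = "cuda"  # or "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Wrap the user message in the chat template the model was tuned on.
messages = [{"role": "user", "content": "What is gravity?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, do_sample=True, temperature=0.2, top_p=0.9)
print(tokenizer.decode(outputs[0]))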


📌Apache 2.0 License.


🟡Demo SmolLM2 1.7B


@Machine_learn
Perfect Roadmap To Learn Data Science In 2024

📖 Book

@Machine_learn
OpenAI's new o3 model is changing the game!

For a long time, ARC was seen as proof that AI models “can’t think.” The argument went: if they truly could, why do they perform so poorly on this benchmark?

Well, those days are over. The o3 model demonstrates not only the ability to think but also the capability to tackle tasks once considered out of reach.

👀 Check out the full breakdown of this breakthrough: https://arcprize.org/blog/oai-o3-pub-breakthrough

It might be time to rethink what AI can achieve. Looking forward to the release!

@Machine_learn
The Art of Data Science.pdf
6.2 MB
Book: The Art of Data Science
Authors: Roger D. Peng & Elizabeth Matsui

@Machine_learn
Probability, Random Processes, and Statistical Analysis: Applications to Communications, Signal Processing, Queueing Theory and Mathematical Finance

📕 Book


@Machine_learn
📑 Application of graph theory in liver research: A review

📎 Study paper

@Machine_learn
Building Blocks for Theoretical Computer Science

🎓 Link

@Machine_learn
🌟 AlphaFold 3

🟡Paper
🟡Demo
🖥GitHub


@Machine_learn
Forwarded from Github LLMs
Friends, the output of this work will be three papers...!
New research papers and github codes

🟢Motivo
🟡Paper 🟡Demo 🟡Github
🟢Video Seal
🟡Paper 🟡Demo 🟡Github
🟢Flow Matching
🟡Paper 🟡Github
🟢Explore Theory-of-Mind
🟡Paper 🟡Github 🟡Dataset
🟢Large Concept Model (LCM)
🟡Paper 🟡Github
🟢Dynamic Byte Latent Transformer
🟡Paper 🟡Github
🟢Memory Layers
🟡Paper 🟡Github
🟢EvalGym
🟡Paper 🟡Github
🟢CLIP 1.2
🟡Paper 🟡Github 🟡Dataset 🟡Model

@Machine_learn
Forwarded from Papers
Greetings,
Our first LLM paper is at the submission stage. A fourth author can still be added; to participate, contact me at my ID.


ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs


Abstract
Objective: This paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs). The framework aims to improve the accuracy, completeness, and usefulness of knowledge graphs in cognitive neuroscience.

Method: To address the limitations of existing knowledge-graph construction tools, especially in handling the complex hierarchical relationships of the cognitive neuroscience literature, ExKG-LLM applies state-of-the-art LLMs to a large dataset of scientific papers and clinical reports to extract, optimize, and integrate new entities and relationships into the CNKG. Performance is evaluated with metrics such as precision, recall, and graph density.

Findings: The ExKG-LLM framework achieved significant improvements, including a precision of 0.80 (up 6.67%), recall of 0.81 (up 15.71%), and F1 score of 0.805 (up 11.81%); the numbers of edges and nodes increased by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, and engagement rates increased by 20%, highlighting areas where stability needs improvement. From a complex-network perspective, the CNKG diameter grew from 13 to 15: although ExKG-LLM enlarged the graph, more steps are now required to reach additional nodes. Time complexity improved to O(n log n), but space complexity worsened to O(n^2), indicating higher memory usage for managing the expanded graph.
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
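The structural metrics cited in the abstract (node/edge counts, density, diameter) can be computed on any knowledge graph with standard tooling. A minimal sketch using networkx; the toy triples below are illustrative stand-ins, not data from the paper:

import networkx as nx

# Toy cognitive-neuroscience triples (illustrative only, not from the paper).
triples = [
    ("hippocampus", "involved_in", "memory consolidation"),
    ("prefrontal cortex", "involved_in", "working memory"),
    ("hippocampus", "connected_to", "prefrontal cortex"),
    ("dopamine", "modulates", "reward learning"),
    ("amygdala", "involved_in", "fear conditioning"),
]

G = nx.Graph()
for head, relation, tail in triples:
    G.add_edge(head, tail, relation=relation)

print("nodes:", G.number_of_nodes())
print("edges:", G.number_of_edges())
print("density:", nx.density(G))

# Diameter is defined only on a connected graph, so use the largest component.
largest = G.subgraph(max(nx.connected_components(G), key=len))
print("diameter:", nx.diameter(largest))

# Consistency check on the reported extraction scores: F1 = 2PR / (P + R).
p, r = 0.80, 0.81
print("F1:", round(2 * p * r / (p + r), 3))  # 0.805, matching the abstract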
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0