Only the fourth author slot on this joint project remains open.
Work starts on the 1st of Dey (around December 21). To collaborate, message me at my ID.
@Raminmousa
⏩SmolLM2-1.7B
⏩SmolLM2-360M
⏩SmolLM2-135M
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda"  # use "cpu" if no GPU is available

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Encode a prompt and generate a continuation
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50)  # cap the generated length
print(tokenizer.decode(outputs[0]))
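For more varied output, sampling can be enabled at generation time. A minimal sketch reusing the model and inputs above; the temperature and top_p values are illustrative assumptions, not settings from the SmolLM2 release:

# Sample from the model instead of greedy decoding
outputs = model.generate(
    inputs,
    max_new_tokens=50,
    do_sample=True,    # draw tokens stochastically
    temperature=0.7,   # assumed value; rescales the token distribution
    top_p=0.9,         # assumed value; nucleus sampling over the top 90% mass
)
print(tokenizer.decode(outputs[0]))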
@Machine_learn
OpenAI's new o3 model is changing the game!
For a long time, ARC was seen as proof that AI models “can’t think.” The argument went: if they truly could, why do they perform so poorly on this benchmark?
Well, those days are over. The o3 model demonstrates not only the ability to think but also the capability to tackle tasks once considered out of reach.
👀 Check out the full breakdown of this breakthrough: https://arcprize.org/blog/oai-o3-pub-breakthrough
It might be time to rethink what AI can achieve. Looking forward to the release!
@Machine_learn
Probability, Random Processes, and Statistical Analysis: Applications to Communications, Signal Processing, Queueing Theory and Mathematical Finance
📕 Book
@Machine_learn
New research papers and GitHub code
🟢 Motivo
🟡 Paper 🟡 Demo 🟡 GitHub
🟢 Video Seal
🟡 Paper 🟡 Demo 🟡 GitHub
🟢 Flow Matching
🟡 Paper 🟡 GitHub
🟢 Explore Theory-of-Mind
🟡 Paper 🟡 GitHub 🟡 Dataset
🟢 Large Concept Model (LCM)
🟡 Paper 🟡 GitHub
🟢 Dynamic Byte Latent Transformer
🟡 Paper 🟡 GitHub
🟢 Memory Layers
🟡 Paper 🟡 GitHub
🟢 EvalGym
🟡 Paper 🟡 GitHub
🟢 CLIP 1.2
🟡 Paper 🟡 GitHub 🟡 Dataset 🟡 Model
@Machine_learn
Meta
Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models | Research - AI at Meta
Unsupervised reinforcement learning (RL) aims at pre-training agents that can solve a wide range of downstream tasks in complex environments. Despite...
Forwarded from Papers
Greetings,
Our first LLM paper is at the submission stage. A fourth author can still be added; to participate, contact me at my ID.
ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs
Abstract
Objective: This paper introduces ExKG-LLM, an innovative framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs). The aim is to improve the accuracy, completeness, and usefulness of knowledge graphs in cognitive neuroscience.
Method: To address the limitations of existing knowledge-graph construction tools, particularly in handling the complex hierarchical relationships found in the cognitive neuroscience literature, we apply state-of-the-art LLMs to a large dataset of scientific papers and clinical reports. The ExKG-LLM framework extracts, optimizes, and integrates new entities and relationships into the CNKG, with performance evaluated on metrics such as precision, recall, and graph density.
Findings: The ExKG-LLM framework achieved significant improvements, including a precision of 0.80 (a 6.67% increase), a recall of 0.81 (a 15.71% increase), and an F1 score of 0.805 (an 11.81% increase); the numbers of edges and nodes grew by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, while engagement rates rose by 20%, highlighting areas where stability needs improvement. From a complex-network perspective, the CNKG diameter increased from 13 to 15: although ExKG-LLM enlarges the graph, more steps are now required to reach additional nodes. Time complexity improved to O(n log n), but space complexity worsened to O(n²), indicating higher memory usage for managing the expanded graph.
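As a quick sanity check, the reported F1 score follows from the stated precision and recall, and the edge/node growth figures are consistent with the reported drop in density. A minimal Python sketch, assuming the standard F1 formula and that density scales as edges over nodes squared:

# Recompute F1 from the precision and recall quoted in the abstract
precision, recall = 0.80, 0.81
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
print(round(f1, 3))  # 0.805, matching the reported F1 score

# Density of a simple graph scales as edges / nodes^2 (up to a constant),
# so edges growing 21.13% while nodes grow 31.92% implies lower density.
edge_growth, node_growth = 1.2113, 1.3192
print(edge_growth / node_growth ** 2)  # ≈ 0.70 < 1, so density decreases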
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
Telegram
Papers
In this channel we plan to share the papers we work on.
We aim to support each other and present new work.
@Raminmousa