Practitioner Guide for Creating Effective Prompts in Large Language Models

๐Ÿ”— Paper

@Machine_learn
๐ŸŒŸ SmolLM2



โฉSmolLM2-1.7B๐ŸŸขSmolLM2-1.7B-Instruct๐ŸŸขInstruct GGUF

โฉSmolLM2-360M๐ŸŸ SmolLM2-360M-Instruct ๐ŸŸ Instruct GGUF

โฉSmolLM2-135M ๐ŸŸ SmolLM2-135M-Instruct ๐ŸŸ Instruct GGUF ะพั‚ ะบะพะผัŒัŽะฝะธั‚ะธ


โ–ถ๏ธSmolLM2-1.7B :

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda"  # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Encode the prompt, generate a continuation, and decode it back to text.
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
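
For the Instruct checkpoints listed above, the prompt should first go through the model's chat template. A minimal sketch using the standard transformers chat-template API (the example question is illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda"  # or "cpu"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Wrap the user message in the model's chat template, then generate.
messages = [{"role": "user", "content": "What is gravity?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)
outputs = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))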


๐Ÿ“ŒApache 2.0 License.


๐ŸŸกDemo SmolLM2 1.7B


@Machine_learn
Perfect Roadmap To Learn Data Science In 2024

๐Ÿ“– Book

@Machine_learn
Gemini API Cookbook

๐Ÿ“š GitHub


@Machine_learn
OpenAI's new o3 model is changing the game!

For a long time, ARC was seen as proof that AI models โ€œcanโ€™t think.โ€ The argument went: if they truly could, why do they perform so poorly on this benchmark?

Well, those days are over. The o3 model demonstrates not only the ability to think but also the capability to tackle tasks once considered out of reach.

๐Ÿ‘€ Check out the full breakdown of this breakthrough: https://arcprize.org/blog/oai-o3-pub-breakthrough

It might be time to rethink what AI can achieve. Looking forward to the release!

@Machine_learn
01. Time Series Visualization from Raw Data to Insights.pdf
11.7 MB
Time Series Visualization from Raw Data to Insights
๐Ÿ”น #Code

@Machine_learn
The Art of Data Science.pdf
6.2 MB
Book: The Art of Data Science
Authors: Roger D. Peng & Elizabeth Matsui

@Machine_learn
Probability, Random Processes, and Statistical Analysis: Applications to Communications, Signal Processing, Queueing Theory and Mathematical Finance

๐Ÿ“• Book


@Machine_learn
๐Ÿ“‘ Application of graph theory in liver research: A review

๐Ÿ“Ž Study paper

@Machine_learn
Building Blocks for Theoretical Computer Science

๐ŸŽ“ Link

@Machine_learn
๐ŸŒŸ AlphaFold 3

๐ŸŸกPaper
๐ŸŸกDemo
๐Ÿ–ฅGitHub


@Machine_learn
Forwarded from Github LLMs
๐ŸŒŸ LLaMA-Mesh
๐ŸŸกArxiv
๐Ÿ–ฅGitHub

https://www.tg-me.com/deep_learning_proj
Friends, the output of this work will be 3 papers...!
New research papers and GitHub code

๐ŸŸขMotivo
๐ŸŸกPaper ๐ŸŸกDemo ๐ŸŸกGitHub
๐ŸŸขVideo Seal
๐ŸŸกPaper ๐ŸŸกDemo ๐ŸŸกGitHub
๐ŸŸขFlow Matching
๐ŸŸกPaper ๐ŸŸกGitHub
๐ŸŸขExplore Theory-of-Mind
๐ŸŸกPaper ๐ŸŸกGitHub ๐ŸŸกDataset
๐ŸŸขLarge Concept Model (LCM)
๐ŸŸกPaper ๐ŸŸกGitHub
๐ŸŸขDynamic Byte Latent Transformer
๐ŸŸกPaper ๐ŸŸกGitHub
๐ŸŸขMemory Layers
๐ŸŸกPaper ๐ŸŸกGitHub
๐ŸŸขEvalGym
๐ŸŸกPaper ๐ŸŸกGitHub
๐ŸŸขCLIP 1.2
๐ŸŸกPaper ๐ŸŸกGitHub ๐ŸŸกDataset ๐ŸŸกModel

@Machine_learn
GAN.pdf
794.1 KB
Text-to-Image Generation with GANs
#GANs
@Machine_learn
Forwarded from Papers
Greetings,
Our first LLM paper is at the submission stage. A fourth author can still be added; to participate, contact me via direct message.


ExKG-LLM: Leveraging Large Language Models for Automated Expansion of Cognitive Neuroscience Knowledge Graphs


Abstract
Objective: This paper introduces ExKG-LLM, a framework designed to automate the expansion of cognitive neuroscience knowledge graphs (CNKG) using large language models (LLMs), improving the accuracy, completeness, and usefulness of knowledge graphs in cognitive neuroscience.

Method: To address the limitations of existing knowledge-graph construction tools, especially in handling the complex hierarchical relationships of the cognitive neuroscience literature, the ExKG-LLM framework applies state-of-the-art LLMs to a large dataset of scientific papers and clinical reports to extract, optimize, and integrate new entities and relationships into the CNKG, evaluating performance with metrics such as precision, recall, and graph density.

Findings: The ExKG-LLM framework achieved significant improvements, including precision of 0.80 (up 6.67%), recall of 0.81 (up 15.71%), and an F1 score of 0.805 (up 11.81%), while the numbers of edges and nodes increased by 21.13% and 31.92%, respectively. Graph density decreased slightly, reflecting a broader but more fragmented structure, and engagement rates increased by 20%, highlighting areas where stability needs improvement. From a complex-network perspective, the diameter of the CNKG grew from 13 to 15: although the graph is larger, more steps are now required to reach additional nodes. Time complexity improved to O(n log n), but space complexity became less efficient, rising to O(n²), indicating higher memory usage for managing the expanded graph.
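
As a rough illustration of the reported metrics (not the authors' code), graph density and diameter can be computed with networkx, and the F1 score follows from the stated precision and recall; the toy graph below is invented:

import networkx as nx

# Toy stand-in for a cognitive neuroscience knowledge graph (invented edges).
g = nx.Graph()
g.add_edges_from([
    ("hippocampus", "memory consolidation"),
    ("memory consolidation", "prefrontal cortex"),
    ("prefrontal cortex", "working memory"),
])

# Density of an undirected graph: 2|E| / (|V| * (|V| - 1)).
print("density:", nx.density(g))

# Diameter: the longest shortest path (the abstract reports 13 -> 15).
print("diameter:", nx.diameter(g))

# F1 from the reported precision and recall: 2PR / (P + R).
precision, recall = 0.80, 0.81
f1 = 2 * precision * recall / (precision + recall)
print(f"F1: {f1:.3f}")  # ~0.805, consistent with the paper's figure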
journal: https://www.inderscience.com/jhome.php?jcode=ijdmb


Participation fee: 12 million.
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0