Forwarded from Papers
Hello — we need author #3 for the paper below.

Title: Hybrid deep learning and machine learning frameworks for air quality prediction during the COVID-19 pandemic

Journal: https://www.sciencedirect.com/journal/expert-systems-with-applications
IF: 7.5
In this paper, we evaluated 26 ensemble and hybrid models for air quality prediction over 1-day, 3-day, and 7-day horizons. To participate in this paper, message me at my ID.


@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
WIS Python programming course started in 2024.04

📖 Github

@Machine_learn
Large language models (LLMs): survey, technical frameworks, and future challenges

https://link.springer.com/content/pdf/10.1007/s10462-024-10888-y.pdf

@Machine_learn
Forwarded from Papers
Hello — continuing our joint research, we plan to start working on LLM models from the 1st of Dey.
This work will be supervised by
Rex (Zhitao) Ying
link: https://scholar.google.com.au/citations?user=6fqNXooAAAAJ&hl=en
We need 2 people for this collaboration.

BioPars: a pre-trained biomedical large language model for Persian biomedical text mining.
1- Initial phase: collecting Persian biological texts from sources (...)
2- Preprocessing and cleaning the texts
3- Training the target transformers
4- Using the trained embeddings in three tasks (...)
Those who would like to participate can let me know by the 1st of Dey.
The server costs $1.20 per hour, and roughly 2,000 hours are needed to train the language model. The contribution for each participant, in addition to carrying out the tasks, is as follows:
🔹 4th participant: $500
🔺 5th participant: $400
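A quick back-of-the-envelope check of the budget stated above (a sketch; the hourly rate, hour count, and the two contribution figures are taken directly from the post):

```python
# Rough cost estimate for the training run described above.
HOURLY_RATE = 1.2      # USD per server hour (stated above)
TRAINING_HOURS = 2000  # approximate hours needed to train the model

total_cost = HOURLY_RATE * TRAINING_HOURS
print(f"Estimated total server cost: ${total_cost:,.0f}")

# Contributions stated for the two open slots
contributions = {"4th participant": 500, "5th participant": 400}
print(f"Covered by the two open slots: ${sum(contributions.values())}")
```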
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
📃 Large language models and their applications in bioinformatics

📎 Study the paper

@Machine_learn
⚡️ Byte Latent Transformer: Patches Scale Better Than Tokens

Byte Latent Transformer (BLT) is a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale, with significant improvements in inference efficiency and robustness.
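As a toy illustration of the byte-level idea (this is not BLT's actual patching, which groups bytes dynamically with a small learned entropy model; the fixed patch size of 4 here is purely an illustrative assumption):

```python
# Toy illustration: raw UTF-8 bytes grouped into fixed-size "patches".
# BLT itself forms patches of varying length based on byte entropy;
# a fixed patch_size is used here only to show the byte-level view.
def byte_patches(text: str, patch_size: int = 4) -> list[bytes]:
    data = text.encode("utf-8")
    return [data[i:i + patch_size] for i in range(0, len(data), patch_size)]

patches = byte_patches("Patches scale better than tokens")
print(len(patches))   # number of patch units the model would process
print(patches[:3])
```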

🖥 Github: https://github.com/facebookresearch/blt

📕 Paper: https://arxiv.org/abs/2412.09871v1

🌟 Dataset: https://paperswithcode.com/dataset/mmlu

@Machine_learn
📃A Comprehensive Survey on Automatic Knowledge Graph Construction

📎 Study the paper

@Machine_learn
🀄 GuoFeng Webnovel: A Discourse-Level and Multilingual Corpus of Web Fiction

🖥 Github: https://github.com/longyuewangdcu/guofeng-webnovel

📕 Paper: https://arxiv.org/abs/2412.11732v1

🌟 Dataset: www2.statmt.org/wmt24/literary-trans

@Machine_learn
Introduction to Data Science – Lecture Material

🔗 Github

@Machine_learn
Only slot #4 remains open on this joint work.
The work starts on the 1st of Dey. To collaborate, message me at my ID.
@Raminmousa
Practitioner Guide for Creating Effective Prompts in Large Language Models

🔗 Paper

@Machine_learn
🌟 SmolLM2



SmolLM2-1.7B 🟢 SmolLM2-1.7B-Instruct 🟢 Instruct GGUF

SmolLM2-360M 🟠 SmolLM2-360M-Instruct 🟠 Instruct GGUF

SmolLM2-135M 🟠 SmolLM2-135M-Instruct 🟠 Instruct GGUF (from the community)


▶️SmolLM2-1.7B :

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda"  # for GPU usage, or "cpu" for CPU usage

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Encode a prompt, generate a continuation, and decode it back to text
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))


📌Apache 2.0 License.


🟡Demo SmolLM2 1.7B


@Machine_learn
Perfect Roadmap To Learn Data Science In 2024

📖 Book

@Machine_learn
New o3 OpenAI model is changing the game!

For a long time, ARC was seen as proof that AI models “can’t think.” The argument went: if they truly could, why do they perform so poorly on this benchmark?

Well, those days are over. The o3 model demonstrates not only the ability to think but also the capability to tackle tasks once considered out of reach.

👀 Check out the full breakdown of this breakthrough: https://arcprize.org/blog/oai-o3-pub-breakthrough

It might be time to rethink what AI can achieve. Looking forward to the release!

@Machine_learn