CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models

28 Mar 2025 · Zhihang Lin, Mingbao Lin, Yuan Xie, Rongrong Ji


Paper: https://arxiv.org/pdf/2503.22342v1.pdf

Code: https://github.com/lzhxmu/cppo

Datasets: GSM8K - MATH
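For context on what the paper accelerates: GRPO scores a group of sampled completions per prompt and standardizes the rewards within the group to get critic-free advantages, and CPPO then prunes completions with small |advantage| before the policy update so fewer sequences reach the forward/backward pass. A minimal sketch of that idea (the `keep_ratio` knob is illustrative, not the paper's actual pruning schedule):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    # Group-relative advantage: standardize each completion's reward
    # against the mean/std of its own group (GRPO's critic-free baseline).
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def cppo_prune(advantages, keep_ratio=0.5):
    # CPPO-style pruning (sketch): keep only the completions with the
    # largest |advantage|, since near-zero-advantage samples contribute
    # little gradient signal relative to their compute cost.
    k = max(1, int(len(advantages) * keep_ratio))
    order = np.argsort(-np.abs(advantages))
    return sorted(order[:k].tolist())

rewards = [1.0, 0.0, 0.0, 1.0, 0.5, 0.0]   # e.g. exact-match scores on GSM8K
adv = grpo_advantages(rewards)
kept = cppo_prune(adv, keep_ratio=0.5)      # indices of retained completions
```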

@Machine_learn
Llama-Nemotron: Efficient Reasoning Models

📚 Paper

@Machine_learn
Introduction to Machine Learning
Laurent Younes

📚 Book

@Machine_learn
Hello everyone. We are running the private SYFA course, whose goal is to introduce the process of writing and publishing papers. The sessions are one hour long and one-on-one, with two sessions per person each week. To register and set a time, please contact me at my ID.

@Raminmousa
VoRA: Vision as LoRA
#ByteDance introduces #VoRA (Vision as #LoRA) — a novel framework that transforms #LLMs into Multimodal Large Language Models (MLLMs) by integrating vision-specific LoRA layers.
All training data, source code, and model weights are openly available!

Key Resources:
Overview: https://t.ly/guNVN
Paper: arxiv.org/pdf/2503.20680
GitHub Repo: github.com/Hon-Wong/VoRA
Project Page: georgeluimmortal.github.io/vora-homepage.github.io
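The key idea, injecting vision capability through trainable low-rank adapters while the base LLM weights stay frozen, can be sketched with a plain LoRA linear layer. A minimal NumPy sketch (the dimensions, rank, and init scale here are illustrative, not VoRA's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    # A frozen base projection W plus a trainable low-rank update B @ A:
    # the mechanism VoRA reuses to add vision-specific capacity without
    # touching the LLM's original weights.
    def __init__(self, d_in, d_out, rank=4, alpha=8.0):
        self.W = rng.normal(size=(d_out, d_in))          # frozen base weight
        self.A = rng.normal(size=(rank, d_in)) * 0.01    # trainable down-proj
        self.B = np.zeros((d_out, rank))                 # trainable up-proj, zero init
        self.scale = alpha / rank

    def __call__(self, x):
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=16, d_out=16)
x = rng.normal(size=(2, 16))
y = layer(x)
```

With `B` zero-initialized, the adapter branch starts as a no-op, so training begins from the unmodified base model's behavior.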

@Machine_learn
Forwarded from Papers
Hello,
Author positions 4 and 5 on this paper are still available. Anyone interested in collaborating, please get in touch with me.


One of the better tools I have been able to develop is the Stock Ai tool, which uses 360 indicators. Back-test reports for this tool are available in the videos below.

May 2024 :

https://youtu.be/aSS99lynMFQ?si=QSk8VVKhLqO_2Qi3

July 2024:

https://youtu.be/ThyZ0mZwsGk?si=FKPK7Hkz-mRx-752&t=209



@Raminmousa
Introducing Continuous Thought Machines

📚 Paper

@Machine_learn
NVIDIA just open sourced Open Code Reasoning models - 32B, 14B AND 7B - APACHE 2.0 licensed 🔥

> Beats O3 mini & O1 (low) on LiveCodeBench 😍

Backed by the OCR dataset, the models are 30% more token-efficient than other equivalent reasoning models

Works with llama.cpp, vLLM, transformers, TGI and more - check them out today!!

https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B

@Machine_learn
A New Efficient Hybrid Technique for Human Action Recognition Using 2D Conv-RBM and LSTM with Optimized Frame Selection


📕 Paper: https://www.mdpi.com/2227-7080/13/2/53

🔥 Datasets:
KTH: https://www.csc.kth.se/cvap/actions/
UCF Sports: https://www.crcv.ucf.edu/research/data-sets/ucf-sports-action/
HMDB51: https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/

@Machine_learn
Comprehensive Analysis of Random Forest and XGBoost Performance with SMOTE, ADASYN, and GNUS Under Varying Imbalance Levels


📕 Paper: https://www.mdpi.com/2227-7080/13/3/88

🔥 Dataset: https://www.kaggle.com/code/rinichristy/customer-churn-prediction-2020
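As a refresher on what SMOTE-style oversampling does: it synthesizes new minority-class points by interpolating between a minority sample and one of its minority-class nearest neighbours. A pure-NumPy sketch of that core step (for real experiments use `imbalanced-learn`'s `SMOTE`/`ADASYN`; this only illustrates the idea):

```python
import numpy as np

rng = np.random.default_rng(42)

def smote_like_oversample(X_min, n_new, k=3):
    # For each synthetic point: pick a minority sample, pick one of its
    # k nearest minority neighbours, and interpolate at a random ratio.
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nn)
        u = rng.random()                      # interpolation ratio in [0, 1)
        out.append(X_min[i] + u * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = rng.normal(loc=3.0, size=(10, 2))    # toy minority class
X_new = smote_like_oversample(X_min, n_new=15)
```

Because each synthetic point is a convex combination of two real minority samples, the new points stay inside the minority class's local neighbourhoods rather than being arbitrary noise.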

@Machine_learn
DeepSeek-Coder

DeepSeek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on project-level code corpus by employing a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, DeepSeek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.

Creator: Deepseek-AI
Stars ⭐️: 15.6k
Forks: 1.5k

Github Repo:
https://github.com/deepseek-ai/DeepSeek-Coder
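The fill-in-the-blank pretraining mentioned above is exposed at inference as a fill-in-the-middle (FIM) prompt format: prefix, hole, suffix, with the model generating the missing middle. A minimal sketch of building such a prompt (the sentinel strings follow the repo's README; verify them against the released tokenizer before relying on them):

```python
# Sentinel tokens for DeepSeek-Coder's fill-in-the-middle format,
# as shown in the project README (note the fullwidth bars).
FIM_BEGIN, FIM_HOLE, FIM_END = "<｜fim▁begin｜>", "<｜fim▁hole｜>", "<｜fim▁end｜>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    # The model is asked to generate the code belonging at the hole,
    # conditioned on both the preceding and following context.
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"

prompt = build_fim_prompt(
    prefix="def quick_sort(arr):\n    if len(arr) <= 1:\n        return arr\n",
    suffix="\n    return quick_sort(left) + [pivot] + quick_sort(right)\n",
)
```

The resulting string is what you would pass to the model's tokenizer; the completion it returns is the code for the hole, not a continuation of the suffix.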

@Machine_learn
Full PyTorch Implementation of
Compressive Transformer


📚 Link


@Machine_learn