This channel is for programmers, coders, and software engineers.
0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages
https://www.tg-me.com/addlist/8_rRW2scgfRhOTc0
https://www.tg-me.com/codeprogrammer
Telegram
Data science
You've been invited to add the folder "Data science", which includes 15 chats.
Click-Calib: A Robust Extrinsic Calibration Method for Surround-View Systems
Surround-View System (SVS) is an essential component in Advanced Driver Assistance System (ADAS) and requires precise calibrations.
Paper: https://arxiv.org/pdf/2501.01557v2.pdf
Code: https://github.com/lwangvaleo/click_calib
Dataset: WoodScape
@Machine_learn
Deep Generative Models for Therapeutic Peptide Discovery: A Comprehensive Review
Study the paper
@Machine_learn
Free access to our secret channels
Free Data Science Books
Programming Handwritten Notes
Python Free Courses
Learn AI with ChatGPT
Data Science Projects
Coding Projects
Free Coding Certified Courses
Quizzes and Job Opportunities
And many more...
Join now: https://www.tg-me.com/machinelearning_deeplearning
Data Science & AI Jobs
Join fast before I delete the link ❤️
Telegram
Artificial Intelligence
Machine Learning & Artificial Intelligence Free Resources
Learn Data Science, Deep Learning, Python with TensorFlow, Keras & many more
Admin: @coderfun
Buy ads: https://telega.io/c/machinelearning_deeplearning
A Survey of Genetic Programming Applications in Modern Biological Research
Study the paper
@Machine_learn
SmolVLM (256M and 500M vision-language models)
Model: https://huggingface.co/collections/HuggingFaceTB/smolvlm-256m-and-500m-6791fafc5bb0ab8acc960fb0
@Machine_learn
Perspectives on Computational Enzyme Modeling: From Mechanisms to Design and Drug Development
Study the paper
@Machine_learn
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
We present JanusFlow, a powerful framework that unifies image understanding and generation in a single model. JanusFlow introduces a minimalist architecture that integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling. Our key finding demonstrates that rectified flow can be straightforwardly trained within the large language model framework, eliminating the need for complex architectural modifications. To further improve the performance of our unified model, we adopt two key strategies: (i) decoupling the understanding and generation encoders, and (ii) aligning their representations during unified training. Extensive experiments show that JanusFlow achieves comparable or superior performance to specialized models in their respective domains, while significantly outperforming existing unified approaches across standard benchmarks. This work represents a step toward more efficient and versatile vision-language models.
Paper: https://arxiv.org/pdf/2411.07975v1.pdf
Code: https://github.com/deepseek-ai/janus
Datasets: GQA, MMBench, MM-Vet, SEED-Bench
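The abstract above pairs an autoregressive language model with rectified flow. The core rectified-flow training objective can be sketched in a few lines; this toy NumPy version illustrates the general idea only, not JanusFlow's actual implementation (the `velocity_fn` interface and array shapes are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def rectified_flow_loss(x0, x1, velocity_fn):
    """Rectified-flow objective (illustrative sketch): sample t ~ U(0,1),
    form the linear interpolation x_t = (1 - t) * x0 + t * x1, and regress
    the model's predicted velocity toward the constant target (x1 - x0)."""
    t = rng.uniform(size=(x0.shape[0], 1))
    x_t = (1.0 - t) * x0 + t * x1   # point on the straight noise-to-data path
    target = x1 - x0                # velocity of the straight path
    pred = velocity_fn(x_t, t)
    return np.mean((pred - target) ** 2)

# Toy check with noise/data pairs and a trivial "model".
x0 = rng.normal(size=(8, 2))   # noise samples
x1 = rng.normal(size=(8, 2))   # "data" samples
loss = rectified_flow_loss(x0, x1, lambda x, t: x1 - x0)
print(loss)  # a model that predicts the true velocity exactly gives zero loss
```

A real model would replace the lambda with a neural network; at sampling time one integrates the learned velocity field from noise to data.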
@Machine_learn
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
A paper submitted by the #DeepSeek team has generated significant attention in the AI community.
This work addresses the enhancement of reasoning capabilities in Large Language Models (LLMs) through the application of reinforcement learning techniques. The authors introduce a novel framework, DeepSeek-R1, which aims to improve LLM reasoning abilities by incorporating incentives for logical reasoning processes within their training. This integration of reinforcement learning allows LLMs to go beyond basic linguistic processing, developing sophisticated reasoning methods that can boost performance across a wide array of complex applications.
This approach has caused much discussion across different communities, but it clearly opens up a whole new direction of development for the research.
Paper: https://arxiv.org/abs/2501.12948
#nn #LLM
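The paper's RL recipe is widely described as using group-relative reward normalization (GRPO) instead of a learned value function: several answers are sampled per prompt, scored by a rule-based verifier, and each answer's advantage is its reward standardized within the group. This sketch illustrates only that one idea, not the paper's actual training code:

```python
import numpy as np

def group_relative_advantages(rewards):
    """Group-relative advantage (GRPO-style sketch): for a group of
    sampled answers to the same prompt, standardize each reward by the
    group mean and std, so no separate value network is needed."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Four sampled answers to one prompt, scored by a rule-based verifier
# (e.g. 1.0 if the final answer checks out, else 0.0).
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print(adv)  # correct answers get positive advantage, wrong ones negative
```

These advantages would then weight a clipped policy-gradient update on the sampled tokens, pushing the model toward answers the verifier rewards.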
@Machine_learn
arXiv.org
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via...
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning...