LLM Zoo: democratizing ChatGPT
The "Phoenix" model achieves competitive performance among open-source English and Chinese models while excelling in languages with limited resources.
🖥 Github: https://github.com/freedomintelligence/llmzoo
⏩ Paper: https://arxiv.org/abs/2304.10453v1
⭐️ Parameters: https://huggingface.co/FreedomIntelligence/phoenix-chat-7b
@Machine_learn
Transfer Knowledge from Head to Tail: Uncertainty Calibration under Long-tailed Distribution
🖥 Github: https://github.com/jiahaochen1/calibration
⏩ Paper: https://arxiv.org/pdf/2304.06537v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/cifar-10
@Machine_learn
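The paper above studies confidence calibration under long-tailed label distributions. As background, here is a minimal pure-Python sketch of Expected Calibration Error (ECE), the standard metric in this line of work — an illustration of what "calibration" measures, not code from the paper's repository:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap between
    per-bin accuracy and per-bin mean confidence, weighted by bin size."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            acc = sum(correct[i] for i in idx) / len(idx)
            conf = sum(confidences[i] for i in idx) / len(idx)
            ece += len(idx) / n * abs(acc - conf)
    return ece

# Toy example: each bin is 0.05 over-confident, so ECE is 0.05.
print(round(expected_calibration_error([0.95, 0.95, 0.55, 0.55],
                                       [1, 1, 1, 0]), 3))  # → 0.05
```

A tail class typically lands in the over-confident bins, which is exactly the gap the paper's head-to-tail knowledge transfer aims to shrink.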
Count anything
An empirical study on few-shot counting using Segment Anything
🖥 Github: https://github.com/vision-intelligence-and-robots-group/count-anything
⏩ Paper: https://arxiv.org/abs/2304.10817v1
🤗 Hugging face: https://huggingface.co/spaces/nebula/counting-anything
📌 Dataset: https://drive.google.com/file/d/1ymDYrGs9DSRicfZbSCDiOu0ikGDh5k6S/view?usp=sharing
@Machine_learn
Improving Segmentation of Objects with Varying Sizes in Biomedical Images using Instance-wise and Center-of-Instance Segmentation Loss Function
🖥 Github: https://github.com/brainimageanalysis/ici-loss
⏩ Paper: https://arxiv.org/pdf/2304.06229v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/atlas-v2-0
@Machine_learn
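The loss above targets objects of very different sizes in one image. As a loose sketch of the instance-wise half of the idea (the paper's full ICI loss also adds a center-of-instance term; this is illustrative code, not the authors' implementation): compute Dice separately per instance and average, so a small missed lesion costs as much as a large one.

```python
def dice(pred, target, eps=1e-6):
    """Soft Dice between two flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def instance_wise_dice_loss(pred, instance_masks, eps=1e-6):
    """Average (1 - Dice) over instances, so each instance contributes
    equally regardless of its pixel count."""
    losses = []
    for inst in instance_masks:
        # restrict the prediction to this instance's region
        masked_pred = [p * t for p, t in zip(pred, inst)]
        losses.append(1.0 - dice(masked_pred, inst, eps))
    return sum(losses) / len(losses)

# One large instance (4 px) segmented perfectly, one small instance
# (1 px) missed entirely: the miss dominates the loss.
pred  = [1, 1, 1, 1, 0]
inst1 = [1, 1, 1, 1, 0]
inst2 = [0, 0, 0, 0, 1]
print(round(instance_wise_dice_loss(pred, [inst1, inst2]), 3))  # → 0.5
```

A plain image-level Dice on the same example would barely register the missed 1-pixel instance.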
Collaborative Diffusion for Multi-Modal Face Generation and Editing
🖥 Github: https://github.com/ziqihuangg/collaborative-diffusion
⏩ Project: https://ziqihuangg.github.io/projects/collaborative-diffusion.html
⏩ Paper: https://arxiv.org/abs/2304.10530v1
⭐️ Dataset: https://paperswithcode.com/dataset/celeba-dialog
@Machine_learn
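Collaborative Diffusion drives several pre-trained uni-modal diffusion models together by combining their noise predictions with spatially varying influence. A toy sketch of that combination step, with fixed weights standing in for the learned "dynamic diffuser" (the function name and shapes are illustrative assumptions, not the authors' API):

```python
def collaborate(eps_preds, weights):
    """Weighted per-position average of several models' noise
    predictions; weights are normalized at each position so the
    combined prediction stays on the same scale."""
    length = len(eps_preds[0])
    out = []
    for i in range(length):
        total_w = sum(w[i] for w in weights)
        out.append(sum(e[i] * w[i] for e, w in zip(eps_preds, weights)) / total_w)
    return out

# Two collaborators: equal influence at position 0, only the first
# model's prediction counts at position 1.
print(collaborate([[1.0, 1.0], [3.0, 3.0]],
                  [[0.5, 1.0], [0.5, 0.0]]))  # → [2.0, 1.0]
```

In the paper this weighting varies over both space and diffusion timestep, which is what lets, e.g., a text-driven model and a mask-driven model control different face regions.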
Packt.Applied.Machine.Learning.and.High.Performance.pdf
20.5 MB
Book: Applied Machine Learning and High-Performance Computing on AWS
Authors: Mani Khanuja, Farooq Sabir, Shreyas Subramanian, Trenton Potgieter
ISBN: 978-1-80323-701-5
year: 2022
pages: 383
Tags: #ML #AWS
@Machine_learn
🔍 Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System
A Self-Controlled Memory (SCM) system that unleashes infinite-length input capacity for large-scale language models.
🖥 Github: https://github.com/toufunao/SCM4LLMs
⏩ Paper: https://arxiv.org/abs/2304.13343v1
📌 Tasks: https://paperswithcode.com/task/language-modelling
@Machine_learn
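The SCM idea is a controller that keeps recent turns in a short context window, archives older ones, and pulls archived turns back in when they are relevant. A toy pure-Python sketch of that loop, with naive keyword-overlap retrieval standing in for the paper's actual memory controller (all names here are illustrative):

```python
from collections import deque

class SelfControlledMemory:
    """Toy SCM-style store: a bounded short-term window plus a
    long-term archive searched by keyword overlap."""

    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # short-term memory
        self.archive = []                   # long-term memory

    def add(self, turn):
        # the oldest turn is about to fall out of the window: archive it
        if len(self.recent) == self.recent.maxlen:
            self.archive.append(self.recent[0])
        self.recent.append(turn)

    def retrieve(self, query, k=2):
        q = set(query.lower().split())
        scored = sorted(self.archive,
                        key=lambda t: len(q & set(t.lower().split())),
                        reverse=True)
        return scored[:k]

m = SelfControlledMemory(window=2)
for turn in ["my name is Alice", "I like green tea",
             "the weather is nice", "let's talk about code"]:
    m.add(turn)
print(m.retrieve("what is my name", k=1))  # → ['my name is Alice']
```

The retrieved turns would be prepended to the prompt alongside the recent window, so the effective context is no longer bounded by the model's window size.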
ZipIt! Merging Models from Different Tasks without Training
ZipIt combines completely distinct models with different initializations, each solving a separate task, into one multi-task model without any additional training.
🖥 Github: https://github.com/gstoica27/zipit
⏩ Paper: https://arxiv.org/abs/2305.03053v1
📌 Dataset: https://paperswithcode.com/dataset/nabirds
@Machine_learn
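The hard part of merging separately initialized models is that "the same" feature can live in different units of each network. The real ZipIt matches redundant features using activations (and can even merge within a model); as a much simpler weight-space stand-in, here is a greedy one-layer sketch that pairs similar output units before averaging:

```python
def cosine(u, v):
    """Cosine similarity of two weight rows."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv + 1e-12)

def zip_layer(wa, wb):
    """Greedily pair each output unit (row) of model A with its most
    similar unused unit of model B, then average the paired rows."""
    used = set()
    merged = []
    for row_a in wa:
        best, best_sim = None, -2.0
        for j, row_b in enumerate(wb):
            if j not in used:
                s = cosine(row_a, row_b)
                if s > best_sim:
                    best, best_sim = j, s
        used.add(best)
        merged.append([(a + b) / 2 for a, b in zip(row_a, wb[best])])
    return merged

# Model B is model A with its two units swapped; matching before
# averaging recovers the original layer instead of smearing it.
print(zip_layer([[1.0, 0.0], [0.0, 1.0]],
                [[0.0, 1.0], [1.0, 0.0]]))  # → [[1.0, 0.0], [0.0, 1.0]]
```

Averaging the same two matrices without matching would give a uniform, uninformative layer — the matching step is what "zipping" buys.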
pymbook.pdf
1.1 MB
Book: Python for you and me
Release 0.5.beta1
Authors: Kushal Das
ISBN: N/A
year: 2023
pages: 175
Tags: #Python #Code
@Machine_learn
Discover and Cure: Concept-aware Mitigation of Spurious Correlation
🖥 Github: https://github.com/wuyxin/disc
⏩ Paper: https://arxiv.org/pdf/2305.00650v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/metashift
@Machine_learn
Segment Any Anomaly without Training via Hybrid Prompt Regularization
🖥 Github: https://github.com/caoyunkang/segment-any-anomaly
🖥 Colab: https://colab.research.google.com/drive/1Rwio_KfziuLp79Qh_ugum64Hjnq4ZwsE?usp=sharing
⏩ Paper: https://arxiv.org/abs/2305.11013v1
📌 Dataset: https://paperswithcode.com/dataset/visa
@Machine_learn
Deep-Learning-for-Natural-Language-Processing.pdf
7.3 MB
Book: Deep Learning for Natural Language Processing (Creating Neural Networks with Python)
Authors: Palash Goyal, Sumit Pandey, Karan Jain
ISBN: 978-1-4842-3685-7
year: 2018
pages: 290
Tags: #NLP #DL #Python #Code
@Machine_learn
LLM-Pruner: On the Structural Pruning of Large Language Models
Compress your LLMs to any size.
🖥 Github: https://github.com/horseee/llm-pruner
⏩ Paper: https://arxiv.org/abs/2305.11627v1
📌 Dataset: https://paperswithcode.com/dataset/piqa
@Machine_learn
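Structural pruning removes whole neurons or channels so the network genuinely shrinks, unlike mask-based unstructured pruning. LLM-Pruner itself scores dependency groups with gradient information; as a hedged toy version of the structural idea only, here is magnitude-based pruning of one linear layer:

```python
def prune_linear(weight, bias, keep_ratio=0.5):
    """Rank output neurons of a linear layer (rows of `weight`) by the
    L1 norm of their weights and keep only the strongest fraction,
    returning a genuinely smaller layer plus the kept indices."""
    scores = [sum(abs(w) for w in row) for row in weight]
    n_keep = max(1, int(len(weight) * keep_ratio))
    keep = sorted(sorted(range(len(weight)),
                         key=lambda i: scores[i],
                         reverse=True)[:n_keep])
    return ([weight[i] for i in keep],
            [bias[i] for i in keep],
            keep)

w = [[0.1, 0.1], [1.0, 1.0], [0.2, 0.2], [2.0, 2.0]]
b = [0.0, 1.0, 2.0, 3.0]
w2, b2, kept = prune_linear(w, b, keep_ratio=0.5)
print(kept)  # → [1, 3]
```

In a real network the next layer's input dimension must shrink to match `kept` — handling those cross-layer dependencies automatically is the part LLM-Pruner contributes.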
QLoRA: Efficient Finetuning of Quantized LLMs
The Guanaco model family outperforms all previously released open models on the Vicuna benchmark, reaching 99.3% of ChatGPT's performance level while requiring only 24 hours of finetuning on a single GPU.
🖥 Github: https://github.com/artidoro/qlora
⏩ Paper: https://arxiv.org/abs/2305.14314
⭐️ Demo: https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi
📌 Dataset: https://paperswithcode.com/dataset/ffhq
@Machine_learn
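The single-GPU claim rests on keeping the frozen base model in 4-bit precision and training only small LoRA adapters. A back-of-envelope sketch of the weight memory (the 40M adapter-parameter count is an illustrative assumption, not a number from the paper, and optimizer states and activations are ignored):

```python
def weight_memory_gb(n_params, bits, extra_params=0, extra_bits=16):
    """Weight memory in GiB: a base model at `bits` per parameter
    plus optional trainable extras (e.g. LoRA adapters) at `extra_bits`."""
    total_bits = n_params * bits + extra_params * extra_bits
    return total_bits / 8 / 1024**3

fp16_full = weight_memory_gb(7e9, 16)        # fp16 finetuning baseline, 7B model
qlora_7b = weight_memory_gb(7e9, 4, 40e6)    # 4-bit base + ~40M LoRA params
print(round(fp16_full, 1), round(qlora_7b, 1))  # → 13.0 3.3
```

Roughly a 4x drop in weight memory before even counting gradients and optimizer states, which LoRA confines to the tiny adapter — hence 65B-scale finetuning fitting on a single 48 GB GPU.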
Hybrid and Collaborative Passage Reranking
🖥 Github: https://github.com/zmzhang2000/hybrank
⏩ Paper: https://arxiv.org/pdf/2305.09313v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/natural-questions
@Machine_learn
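Hybrid reranking combines lexical (term-matching) and dense (semantic) retriever signals. HybRank's actual method builds similarity features for a lightweight reranker; as a hedged, simplest-possible stand-in, here is score fusion with min-max normalization and a weighted sum:

```python
def minmax(scores):
    """Scale scores to [0, 1] so lexical and dense scales are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def hybrid_rerank(passages, lexical, dense, alpha=0.5):
    """Re-order passages by a convex combination of normalized
    lexical and dense scores; alpha weights the lexical evidence."""
    lex, den = minmax(lexical), minmax(dense)
    fused = [alpha * l + (1 - alpha) * d for l, d in zip(lex, den)]
    order = sorted(range(len(passages)), key=lambda i: fused[i], reverse=True)
    return [passages[i] for i in order]

# Passage "b" is mediocre lexically but a strong semantic match,
# so it overtakes the top lexical hit after fusion.
print(hybrid_rerank(["a", "b", "c"],
                    lexical=[10.0, 5.0, 1.0],
                    dense=[0.1, 0.9, 0.2]))  # → ['b', 'a', 'c']
```

Tuning `alpha` (or learning the combination, as the paper does) trades off exact term overlap against semantic similarity.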
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
Hiera is a hierarchical vision transformer that is fast, powerful, and, above all, simple. It outperforms the state of the art across a wide array of image and video tasks while being much faster.
pip install hiera-transformer
🖥 Github: https://github.com/facebookresearch/hiera
⏩ Paper: https://arxiv.org/abs/2306.00989v1
📌 Dataset: https://paperswithcode.com/dataset/inaturalist
@Machine_learn
Hello! We are offering a 50% discount on the machine learning and deep learning packages for those who need them. If interested, please let me know.
@Raminmousa