Create your own roadmap to succeed as a Data Engineer.
In the ever-evolving field of data engineering, staying up to date with the latest technologies and best practices is crucial, as industries rely heavily on data-driven decision-making.
As we approach 2024, the field of data engineering continues to evolve, bringing new challenges and opportunities around the following key areas:
- Programming languages: Python, Scala, and Java are among the most popular programming languages for data engineers.
- Databases: SQL databases such as SQL Server, MySQL, and PostgreSQL, and NoSQL databases such as MongoDB and Cassandra, are widely used.
- Data modeling: the process of creating a blueprint for a database; it helps ensure the database is designed to meet the needs of the business.
- Cloud computing: AWS, Azure, and GCP are the three major cloud platforms used to build and deploy data engineering solutions.
- Big data technologies: Apache Spark, Kafka, Beam, and Hadoop are some of the most popular technologies for processing and analyzing large datasets.
- Data warehousing: Snowflake, Databricks, BigQuery, and Redshift are popular platforms for storing and analyzing large datasets for business intelligence purposes.
- Data streaming: Apache Kafka and Spark are popular streaming platforms for processing and analyzing data in real time.
- Data lakes and data meshes: two emerging data management architectures. Data lakes are centralized repositories for all types of data, while data meshes are decentralized architectures that distribute data ownership across multiple domains.
- Orchestration: pipelines are orchestrated with tools like Airflow, Dagster, or Mage to schedule and monitor workflows.
- Data quality, data observability, and data governance: data quality work keeps data accurate, complete, and consistent; data observability helps monitor and understand data systems; data governance establishes policies and procedures for managing data.
- Data visualization: Tableau, Power BI, and Looker are three popular tools for creating charts and graphs that communicate data insights to stakeholders.
- DevOps and DataOps: two sets of practices used to automate and streamline the development and deployment of data engineering solutions.
Developing good communication and collaboration skills is equally important for understanding the business side of data engineering, such as project management and stakeholder engagement.
Stay updated on emerging trends like AI/ML and IoT, which are used to build intelligent data pipelines and data warehouses.
Data engineers who want to be successful in 2024 and beyond should focus on developing their skills and experience in the areas listed above.
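The orchestration point above can be made concrete with a minimal extract-transform-load pipeline in plain Python. In practice each function would be a task scheduled and monitored by a tool like Airflow or Dagster; the function names and sample rows here are hypothetical, purely for illustration.

```python
# A minimal ETL sketch: three stages an orchestrator would schedule as tasks.

def extract():
    # Stand-in for pulling rows from a source system (API, database, file).
    return [{"user": "a", "amount": "10.5"}, {"user": "b", "amount": "3.2"}]

def transform(rows):
    # Cast string fields to the proper types before loading.
    return [{**r, "amount": float(r["amount"])} for r in rows]

def load(rows):
    # Stand-in for writing to a warehouse; here we just return a total.
    return sum(r["amount"] for r in rows)

def run_pipeline():
    # An orchestrator would run these as dependent tasks with retries
    # and monitoring; chaining them directly shows the data flow.
    return load(transform(extract()))

print(run_pipeline())
```

The value of an orchestrator over a plain script like this is scheduling, retries, backfills, and visibility into which stage failed.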
Data Science and Machine Learning Projects with source code
This repository contains articles, GitHub repos, and Kaggle kernels that provide data science and machine learning projects with code.
Creator: Durgesh Samariya
Stars: 125
Forks: 34
https://github.com/durgeshsamariya/Data-Science-Machine-Learning-Project-with-Source-Code
#machine #learning #datascience
--------------
Join @datascience_bds for more cool repositories.
*This channel belongs to @bigdataspecialist group
What is Data Science?
If you have absolutely no idea what Data Science is and are looking for a quick, non-technical introduction, this course will help you get started with the fundamental concepts underlying Data Science.
If you are an experienced Data Science professional, attending this course will give you some ideas on how to explain your profession to a layperson.
Rating: 4.2 out of 5
Students: 24,071
Duration: 40 min of on-demand video
Created by: Gopinath Ramakrishnan
Course Link
#datascience #data_science
--------------
Join @datascience_bds for more
In Data Science you can find multiple data distributions...
But where are they typically found?
Check examples of 4 common distributions:
1. Normal Distribution:
Often found in natural and social phenomena where many factors contribute to an outcome. Examples include heights of adults in a population, test scores, measurement errors, and blood pressure readings.
2. Uniform Distribution:
Appears when every outcome in a range is equally likely. Examples include rolling a fair die (each number has an equal chance of appearing) and selecting a random number within a fixed range.
3. Binomial Distribution:
Used when you are dealing with a fixed number of trials or experiments, each of which has only two possible outcomes (success or failure), like flipping a coin a set number of times or counting the number of defective items in a batch.
4. Poisson Distribution:
Common in scenarios where you are counting the number of times an event happens over a specific interval of time or space. Examples include the number of phone calls received by a call centre in an hour or the number of taxi arrivals at a stand per hour.
Each distribution offers insights into the underlying processes of the data and is useful for different kinds of statistical analysis and prediction.
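The four distributions above can be sampled directly with NumPy; a quick sketch, with the parameters (mean height 170 cm, 10 coin flips, 4 calls per hour) chosen only to echo the examples:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000  # large sample so the empirical means settle near theory

normal  = rng.normal(loc=170, scale=10, size=n)   # e.g. adult heights in cm
uniform = rng.integers(1, 7, size=n)              # fair six-sided die rolls
binom   = rng.binomial(n=10, p=0.5, size=n)       # heads in 10 coin flips
poisson = rng.poisson(lam=4, size=n)              # calls per hour, mean 4

# Sample means should sit close to the theoretical expectations
# (170, 3.5, 5, and 4 respectively).
print(normal.mean(), uniform.mean(), binom.mean(), poisson.mean())
```

Plotting histograms of these arrays reproduces the familiar bell, flat, humped, and right-skewed shapes.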
Data Analytics and Hypothesis Testing.pdf (1.9 MB)
Data Analytics and Hypothesis Testing
Neural Networks and Deep Learning
Neural networks and deep learning are integral parts of artificial intelligence (AI) and machine learning (ML). Here's an overview:
1. Neural Networks: Neural networks are computational models inspired by the human brain's structure and functioning. They consist of interconnected nodes (neurons) organized in layers: input layer, hidden layers, and output layer.
Each neuron receives input, processes it through an activation function, and passes the output to the next layer. Neurons in subsequent layers perform more complex computations based on previous layers' outputs.
Neural networks learn by adjusting weights and biases associated with connections between neurons through a process called training. This is typically done using optimization techniques like gradient descent and backpropagation.
2. Deep Learning: Deep learning is a subset of ML that uses neural networks with multiple layers (hence the term "deep"), allowing them to learn hierarchical representations of data.
These networks can automatically discover patterns, features, and representations in raw data, making them powerful for tasks like image recognition, natural language processing (NLP), speech recognition, and more.
Deep learning architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Transformer models have demonstrated exceptional performance in various domains.
3. Applications: Computer Vision: object detection, image classification, facial recognition, etc., leveraging CNNs.
Natural Language Processing (NLP): language translation, sentiment analysis, chatbots, etc., utilizing RNNs, LSTMs, and Transformers.
Speech Recognition: Speech-to-text systems using deep neural networks.
4. Challenges and Advancements: Training deep neural networks often requires large amounts of data and computational resources. Techniques like transfer learning, regularization, and optimization algorithms aim to address these challenges.
Advancements in hardware (GPUs, TPUs), algorithms (improved architectures like GANs, Generative Adversarial Networks), and techniques (attention mechanisms) have significantly contributed to the success of deep learning.
5. Frameworks and Libraries: There are various open-source libraries and frameworks (TensorFlow, PyTorch, Keras, etc.) that provide tools and APIs for building, training, and deploying neural networks and deep learning models.
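The training loop described in point 1 (forward pass, backpropagation, gradient descent) can be shown end to end with NumPy alone. This is a toy sketch, not any particular framework's API: a single hidden layer learning XOR, with layer sizes, learning rate, and step count chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1

for step in range(5_000):
    # Forward pass: hidden activations, then output probability.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (backpropagation): for binary cross-entropy with a
    # sigmoid output, the gradient w.r.t. the output pre-activation is
    # simply p - y; the chain rule propagates it to every parameter.
    dz = p - y
    dW2, db2 = h.T @ dz, dz.sum(axis=0)
    dh = (dz @ W2.T) * (1 - h ** 2)     # tanh'(a) = 1 - tanh(a)^2
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient-descent update of weights and biases.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(p.ravel().round(2))  # approaches [0, 1, 1, 0] as training succeeds
```

Frameworks like TensorFlow and PyTorch automate exactly the backward-pass bookkeeping done by hand here.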
Data Science Interview Questions.pdf (1.8 MB)
Data Science Interview Questions
transaction-fraud-detection
A data science project to predict whether a transaction is a fraud or not.
Creator: juniorcl
Stars: 103
Forks: 53
https://github.com/juniorcl/transaction-fraud-detection
#machine #learning #datascience
--------------
Join @datascience_bds for more cool repositories.
*This channel belongs to @bigdataspecialist group
Learn Data Cleaning with Python
Perform Data Cleaning Techniques with the Python Programming Language. Practice and Solution Notebooks included.
Rating: 4.1 out of 5
Students: 10,171
Duration: 50 min of on-demand video
Created by: Valentine Mwangi
Course Link
#datascience #data_cleaning #python
--------------
Join @datascience_bds for more
Machine Intelligence - an Introductory Course
Learn the cutting-edge Algorithms in the field of Machine Learning, Deep Learning, Artificial Intelligence, and more!
Rating: 4.1 out of 5
Students: 14,063
Duration: 40 min of on-demand video
Created by: Taimur Zahid
Course Link
#datascience #machinelearning
--------------
Join @datascience_bds for more
Deep Learning CNN Project.pdf (3.8 MB)
Deep Learning CNN Project: Cat vs Dog Classification
Key Highlights:
- 25,000 training images, 12,500 testing images
- Custom fully connected layers
- Binary cross-entropy loss function
- Exponential decay learning rate schedule
Tools & Libraries:
- TensorFlow & Keras
- NumPy, OpenCV, Matplotlib
- Learning rate scheduling
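The exponential-decay schedule from the highlights above can be written in a few lines of plain Python. The hyperparameters here (initial rate 0.01, decayed by a factor of 0.9 every 1,000 steps) are illustrative choices, not the project's actual settings.

```python
def exponential_decay(step, initial_lr=0.01, decay_rate=0.9, decay_steps=1000):
    # Learning rate shrinks geometrically: every `decay_steps` steps,
    # it is multiplied by `decay_rate`.
    return initial_lr * decay_rate ** (step / decay_steps)

# The rate decays smoothly as training progresses.
for step in (0, 1000, 2000, 5000):
    print(step, exponential_decay(step))
```

Keras offers this same formula as a built-in schedule (`tf.keras.optimizers.schedules.ExponentialDecay`), which is the usual way to wire it into an optimizer.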
Data Preprocessing
Data preprocessing is an indispensable stage in the data science workflow, crucial for the success of downstream processes such as analytics and machine learning modeling. It involves a comprehensive set of operations that prepare raw data for further processing and analysis. This stage is fundamental because it directly impacts the quality of insights derived from the data and the performance of predictive models.
The importance of data preprocessing stems from the fact that real-world data is often incomplete, inconsistent, and lacking in certain behaviors or trends. It may contain errors, outliers, or noise that can significantly distort results and lead to misleading conclusions.
Therefore, preprocessing aims to clean and organize the data, enhancing its quality and making it more suitable for analysis.
I've compiled the following list, which includes over 150 essential data preprocessing operations, ranging from basic data cleaning techniques like handling missing values and outliers to more advanced procedures like feature engineering, handling imbalanced datasets, and preprocessing for specific data types like text and images.
Mastery of these techniques is crucial for anyone looking to delve into data science, as they lay the groundwork for all subsequent steps in the data analysis and machine learning pipeline.
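Three of the basic cleaning operations mentioned above (missing values, outliers, scaling) look like this in NumPy. The toy feature values are made up for the sketch:

```python
import numpy as np

# A 1-D feature with one missing value and one obvious outlier.
x = np.array([12.0, 15.0, np.nan, 14.0, 13.0, 250.0])

# 1. Handle missing values: impute with the median of the observed values.
x = np.where(np.isnan(x), np.nanmedian(x), x)

# 2. Handle outliers: clip values to the 5th-95th percentile range.
lo, hi = np.percentile(x, [5, 95])
x = np.clip(x, lo, hi)

# 3. Standardize: rescale to zero mean and unit variance (z-score scaling).
x = (x - x.mean()) / x.std()

print(x.round(2))
```

Libraries such as pandas and scikit-learn wrap these same steps (e.g. imputers and scalers) in reusable, pipeline-friendly form.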
Data-Science-Regular-Bootcamp
Regular practice on Data Science, Machine Learning, Deep Learning, solving ML project problems, and analytical issues, to regularly boost knowledge. The goal is to help learners with learning resources in the Data Science field.
Creator: Sanjoy Kumar Biswas
Stars: 68
Forks: 30
https://github.com/imsanjoykb/Data-Science-Regular-Bootcamp
#machine #learning #datascience
--------------
Join @datascience_bds for more cool repositories.
*This channel belongs to @bigdataspecialist group