Tech Stock Stories: The Pandemic Winners
As we breathe a collective sigh of relief that the COVID-19 pandemic is behind us, it’s interesting to think about the winners and losers of this unique period in living memory. Judging by the stock performance of US tech giants, I’d say their shareholders did rather well while the rest of us switched to remote work and spent our time indoors. Please enjoy my Tableau viz below, a candlestick chart of the stock performance of US tech giants over the pre- and post-pandemic period. It shows the monthly average highs, lows, opens and closes, and you can pick the company of interest from the drop-down list on the right.
What is Analytics Engineering?
The well-known roles on a data team are Data Analyst/Scientist and Data Engineer. In recent years, however, there has been growing demand for data-driven decision making to be distributed throughout the organisation, and a traditional data team risks becoming a bottleneck as the need for insights and analytics outgrows what it can deliver. With this increased demand has come a new role, the Analytics Engineer, allowing the typical data team to better encompass these evolving responsibilities and specialist technologies. This evolution has driven a paradigm shift in data processes from Extract-Transform-Load (ETL) to Extract-Load-Transform (ELT), which facilitates a more self-service oriented Business Intelligence (BI) operating model where data analysts and data scientists can be more embedded with domain teams.
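To make the ELT idea concrete, here is a minimal Python sketch of my own (not taken from the post): the raw extract is loaded into the warehouse untouched, and the transformation is expressed in SQL inside the warehouse, where analytics engineers can own it. The connection string, file, and table names are hypothetical.

```python
# Minimal ELT sketch: load raw data first, then transform inside the warehouse.
# The connection string, CSV file, and table/column names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:password@localhost:5432/warehouse")

# Extract + Load: land the raw extract as-is in a staging table.
raw = pd.read_csv("orders_export.csv")
raw.to_sql("stg_orders", engine, if_exists="replace", index=False)

# Transform: the modelling step runs as SQL in the warehouse itself.
with engine.begin() as conn:
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS fct_daily_revenue AS
        SELECT order_date, SUM(amount) AS revenue
        FROM stg_orders
        GROUP BY order_date
    """))
```

In an ETL setup, that aggregation would instead happen in application code before anything reached the warehouse; pushing it downstream is what opens the transformation layer up to analysts.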
Round Peg in a Square Hole
This post is, in a way, part 1 of 2, because it features the exploratory analysis of a self-generated data set. Let me give you the background quickly, because I will go deeper in the next post: I have had a chronic problem with insomnia since I was a teenager. I had a sense that certain foods were triggering bad nights’ sleep, so for a year I kept a food diary of what I’d eaten for dinner and recorded my hours of quality sleep with a fitness tracker.
Data Pipelines For Machine Learning
Data pipelines are essential for machine learning projects: they manage the flow of data from various sources, ensure data quality, and automate data preparation. What’s the end goal of a data pipeline? The resulting clean, processed data may be used for analysis, business intelligence and reporting; another common use these days is as the input for Machine Learning (ML). While data pipelines are often hand-coded in high-level programming languages such as Java, there are plenty of configurable (point-and-click) tools available for the task. One benefit of these tools is that many have built-in components for the ML tasks of feature engineering, training, evaluating and deploying models. In this blog post, I will compare and contrast SAS, KNIME, Alteryx and RapidMiner, all of which I have used extensively.
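For contrast with the point-and-click tools reviewed below, here is a minimal hand-coded sketch using scikit-learn, chaining feature engineering, training and evaluation in one pipeline. This is my own illustration rather than anything drawn from those tools, and the CSV file and column names are hypothetical.

```python
# A minimal hand-coded ML pipeline sketch with scikit-learn.
# The CSV file and column names ("age", "income", "churned") are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("customers.csv")
X, y = df[["age", "income"]], df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Feature engineering (imputation + scaling) and model training in one object.
pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipeline.fit(X_train, y_train)

# Evaluation on held-out data.
print("Accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))
```

The graphical tools bundle each of these steps (imputation, scaling, model training, scoring) as drag-and-drop nodes, which is exactly the trade-off the post explores.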
Data Pipelines and Data Engineers
Data pipelines are critical for ensuring that data is accurate, timely, and available to those who need it. Without data pipelines, organizations would struggle to process and make sense of the vast amounts of data that they collect. Data pipelines enable organizations to build machine learning models, conduct data analysis, and make data-driven decisions. In this blog post, we will discuss data pipelines and the role of data engineers in building and maintaining them.