This talk was helpful in that it took me back to my "thick client" early days, pre web browser, when my Visual FoxPro stack, connected via ODBC to big servers, dealt in read-only health procedure data from the cath labs and cardiovascular operating rooms (CVOR). My RDBMS tables mapped every artery of every patient affected by stenosis or other coronary pathology. Complications led to procedures and their outcomes, some of which became the inherited complications of the next procedure, and so on.
The scene has changed since those days, as data turned into big data, and as analysis tools started combing over larger server farms, using MapReduce (Hadoop) and a host of Apache projects (Spark, Flink, Kafka). The speaker takes us from those old days to how we do things today, assuming the need to scale up without falling over. How does one deal with the pressure to grow? That's like sails to the wind, if you have a seaworthy craft.
The other revolution in cloud native ecology is the growth of containerized microservices à la Docker and Kubernetes. Get a lot of producers and subscribers messaging one another in response to the streamed data onslaught. Push all the end-user rendering cosmetics to the clients, with their web browsers and visualization tools. Customize their dashboards. Some workstations monitor, some upload new data, some report on trends, and so forth.
I go around with my little laptop, like a guitar, and strum my sound, these days involving visualizing Flextegrity in a Jupyter Notebook. I learn about cloud native environments from OSCON proposals and YouTube videos, and from O'Reilly Safari Online when I can afford it. What would be the Python API to Kafka, for example?
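To answer my own question, one common route is the third-party kafka-python package (`pip install kafka-python`). Here's a minimal sketch of a producer and consumer pair; the topic name, the broker at `localhost:9092`, and the little "cath lab reading" payload are all my own hypothetical stand-ins, not anything from the talk.

```python
import json


def encode_reading(reading):
    """Serialize a dict to UTF-8 JSON bytes, the wire format we'll use."""
    return json.dumps(reading).encode("utf-8")


def publish_reading(topic, reading, servers="localhost:9092"):
    """Send one reading to a Kafka topic and wait for delivery."""
    from kafka import KafkaProducer  # lazy import: needs kafka-python installed
    producer = KafkaProducer(
        bootstrap_servers=servers,
        value_serializer=encode_reading,
    )
    producer.send(topic, reading)
    producer.flush()  # block until the broker has the message
    producer.close()


def consume_readings(topic, servers="localhost:9092"):
    """Yield decoded readings from a topic, starting at the beginning."""
    from kafka import KafkaConsumer  # lazy import: needs kafka-python installed
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=servers,
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        yield message.value


# Usage (assuming a broker is running):
#   publish_reading("procedures", {"patient": 42, "vessel": "LAD"})
#   for reading in consume_readings("procedures"):
#       print(reading)
```

From a Jupyter Notebook this is enough to strum along with a local broker: one cell publishes, another iterates the consumer and feeds whatever visualization is on the dashboard.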