Interesting topic and great essay.
My key concern is that such a novel manufacturing process would supersede the current one only if it is either more economical or more valuable to customers. I am uncertain whether 3D printing chocolates is either of those.
On the manufacturing side, the current process in the chocolate industry is very mature and well proven. On the demand side, my perception is that 3D printing is more of a “fun application” (with low repeat sales) than a real demand booster.
Consequently, I wonder whether chocolate is an industry in which AM could really thrive.
Fascinating topic and very well-written.
The second question you raise I found extremely relevant and even disturbing. In addition to data breaches, I also wonder whether there is a limit to the weapons that could be produced through AM. I have seen elsewhere that it is already possible to “print guns”, except for a few mechanical components that are truly complex. Besides, even without data breaches, it is probably possible to reverse-engineer some parts in order to reconstruct the data files.
Putting both points together, my concern about this topic is: what are the safety impacts of AM being applied to weapons and other military equipment, both in a war scenario and in a civil/urban environment?
Fascinating topic and very well-written essay.
This remarkable decision by Tesla reminds me of Volvo’s decision not to patent its invention of the three-point seat belt. Similarly, the company thought the innovation was too important for humankind for a single company to profit from it. The decision probably benefited Volvo’s brand equity so much that it outweighed the likely loss in sales due to competition.
This is probably also true of Tesla. Although other companies are increasingly competitive in the EV space, the Tesla brand is certainly unique and, as long as they are able to deliver on their promises, they should not incur too much of a problem due to this OI policy.
Fascinating topic and very well-written.
One idea that I think is applicable to the demand pickup problem is what Muhammad Yunus calls a “social business”: an enterprise focused on doing social good while remaining profitable, so that it can reinvest its profits into its own growth. The aim is to avoid the need for donations and instead build a self-sustaining social business.
This OI-powered sanitary solution could probably leverage this idea to grow.
Fascinating topic and extremely well-written essay.
First, let me echo how important such a tool is to the healthcare market. In Brazil, for example, it is estimated that around US$ 4 billion is wasted on fraud or on exams that were not necessary at all, even when no outright “fraud” is involved.
As to your question about detecting new frauds, that is precisely the purpose of the algorithms described as unsupervised machine learning, which essentially try to find patterns without any prior training phase. This is the case with Google clustering related news stories without prior knowledge of what the news will be. Or, more relevant to the topic here, it is the case of a project in Brazil called Operation Love Serenade, which uses machine learning to monitor public spending and identify fraud without the ability to train the algorithm on the types of fraud that will be attempted. They achieve that by letting the algorithm discover which outliers or suspicious patterns are starting to emerge in the data.
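A minimal sketch of this outlier-driven idea, assuming scikit-learn is available (the data, feature, and parameters here are purely illustrative and not taken from any of the projects mentioned): no labels are given, yet the model still flags the unusual spending records.

```python
# Hypothetical sketch: unsupervised anomaly detection on spending amounts.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated "normal" spending amounts plus a few injected outliers.
normal = rng.normal(loc=100.0, scale=10.0, size=(200, 1))
outliers = np.array([[400.0], [5.0], [350.0]])
X = np.vstack([normal, outliers])

# No fraud labels are provided: the model learns what "typical" looks
# like and marks points that are easy to isolate as anomalies (-1).
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = model.predict(X)
suspicious = X[flags == -1].ravel()
print(sorted(suspicious))
```

The key point mirrors the comment above: the algorithm was never told what a “fraud” looks like, yet the injected extreme values surface as outliers worth a human review.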
Another potential application of this tool for Aetna is identifying combinations of exams that do not make sense (or whose order seems to be wrong). For example, a payer could question a provider as to why Exam B was performed if Exam A is unequivocally more comprehensive and had already been done before Exam B. That would not be classified as fraud but is likely to be considered waste.
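As a toy illustration of such an order check (the exam names and the subsumption rule are invented for the example), a simple rule engine can flag a narrower exam billed after a broader one that already covers it:

```python
# Hypothetical waste check: flag claims for narrow exams performed after a
# subsuming (more comprehensive) exam for the same patient.
from datetime import date

# Invented subsumption rules: exam -> narrower exams it makes redundant.
SUBSUMES = {"full_body_mri": {"knee_mri", "spine_mri"}}

claims = [
    {"patient": "p1", "exam": "full_body_mri", "date": date(2024, 1, 5)},
    {"patient": "p1", "exam": "knee_mri", "date": date(2024, 1, 20)},
    {"patient": "p2", "exam": "knee_mri", "date": date(2024, 1, 7)},
]

def flag_waste(claims):
    """Return claims for narrow exams performed after a subsuming exam."""
    flagged = []
    for c in claims:
        for prior in claims:
            if (prior["patient"] == c["patient"]
                    and prior["date"] < c["date"]
                    and c["exam"] in SUBSUMES.get(prior["exam"], set())):
                flagged.append(c)
    return flagged

print(flag_waste(claims))  # only p1's later knee_mri is flagged
```

As in the comment, the flagged claim is not fraud per se, just a candidate for a “why was this needed?” question from the payer.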
Fascinating topic and very well-written essay.
It is highly interesting that two of the most debated features of ML are absent from, or have low impact on, this healthcare application.
First, the data used to train the algorithm does not seem to include significant biases or errors that could hurt its accuracy and reliability; this is achieved by selecting only the most respected doctors. Second, the algorithm is used as a tool and not as a final answer to the questions at hand, which seems essential for its long-term sustainability, particularly considering that human lives are involved.
As to whether it is scalable, my perception is that the current 100% penetration is not an impediment to growth. On the contrary, it is a selling point that could be used to expand geographically (e.g., internationally) and into other verticals, as cross-selling would be relatively straightforward.
My key concern is indeed the data-integration issue. As more features are added to an algorithm, not only might some data sets become unusable due to incompleteness (as you pointed out), but the retrained algorithm may also change its prediction patterns. The risk is that a doctor could run the improved version on an old case and find that they had committed both type I and type II errors (false positives and false negatives), lowering their confidence in the tool.
Fascinating topic and essay.
My concern about Preseries is whether it can reach a relevant size, which inherently depends on its revenue model. If they charge a fixed fee per prediction, the service may become widespread, which in turn may actually prevent it from being valuable: if everybody can read the Preseries score, is it providing any real value to the readers? Even if VCs often co-invest, they are still ultimately competing for the best deals, so a broadly used service has its usefulness reduced by its potential ubiquity.
On the other hand, if the fee is variable, i.e. higher if a startup ends up being a good investment, the business model may be more sustainable. Assuming that VCs compensate Preseries fairly (instead of trying to conceal their use of the score), they would most certainly be willing to pay a substantial amount as long as Preseries remains relevant to their screening process.
In my view, the competitiveness that will result from SmartRide’s entry into the market is bound to be highly beneficial for the population.
First, it will be an additional service, at risk of dying naturally if it fails to provide any value to the community. Second, it is a collective means of transportation, and hence advantageous in comparison to ride-hailing companies. Third, it will put pressure on the regulator to watch for trends and to maintain or improve the quality of the buses.
Therefore, I see several upsides and admire this project. In case it fails, I am certain that others will then come up with better ideas, as long as the regulator allows innovation to flow.
Interesting topic and excellent article.
My question is whether ML, in this case acting as an “automated insight generation machine”, works best when delivered directly to customers without any human involvement, or with an actual person distilling the insights.
Of course, Einstein’s “express insight” system is more consistent with Salesforce’s current business model, but in my past experience (which may be biased given its services nature), a human touch, coupled with sanity checks and neat packaging, can greatly increase the value delivered to clients.
After all, physicists are still trying to reconcile Einstein’s general relativity with Quantum Physics. My fear is that this new Einstein may become similarly incompatible with the other pieces of the business puzzle, even if the fundamentals are all perfect.
Excellent article. Interesting and relevant topic, and very well written.
The points where I do not necessarily agree are:
1) Is a more centralized approach really better than the current system, where different models are being tested and compared? As with other technologies (or technological applications), in their nascent phase it is often valuable to have a cone of divergent solutions preceding a convergent one. My impression is that we are not mature enough to enter the convergent phase.
2) Assuming that it is a supervised learning algorithm, isn’t this approach going to repeat the mistakes of the past? For instance, given a training set (i.e. the historical data) that was meaningfully impacted by Quantitative Easing and other hotly debated monetary practices, is the model going to repeat those measures without questioning whether they made sense in the first place? In other words, if there is still no consensus on the economic theory, should there be a model that keeps propagating the past?
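A toy sketch of this concern, assuming scikit-learn is available (all numbers are invented, not real monetary data): a model fit only on an easing-era regime, where policy rates stayed near zero regardless of inflation, simply extrapolates that regime when asked about a very different situation.

```python
# Hypothetical illustration: supervised learning propagates whatever
# regime dominates its training data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented "history": inflation (%) vs. policy rate (%) under an easing
# regime, where rates barely moved even as inflation rose.
inflation = np.array([[1.0], [1.5], [2.0], [2.5], [3.0]])
policy_rate = np.array([0.1, 0.1, 0.15, 0.2, 0.25])

model = LinearRegression().fit(inflation, policy_rate)

# Out-of-regime query: 8% inflation. The model extrapolates the
# easing-era pattern rather than "questioning" whether it made sense.
pred = model.predict(np.array([[8.0]]))[0]
print(round(pred, 2))  # well under 1%, far from any conventional response
```

The model is not wrong about its data; it is faithfully propagating a regime that may itself be the object of debate, which is exactly the point raised above.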