Activity Feed

This is an interesting piece on the MBTA’s Innovation Proposal Program. Compared to some of the other open innovation programs written about for this assignment, the MBTA has done a good job of setting the bar for submissions high enough that the quality of ideas is relatively high. That said, higher-quality ideas come at a direct cost to both quantity and end-user input. People who use the T are using it for a practical purpose and often don’t have the time or energy to submit their grand idea for making public transportation better. Advertising the program more widely, taking advantage of riders’ “often idle” time on the T, and lowering the barriers to entry could help source more, smaller ideas for the program.

In addition, I wonder whether there is a role for collaboration with schools in this program. For instance, engineering classes might assign a project to design technology that helps the T (e.g., heating coils), and operations classes might assign a project on how to optimize the T schedule. As Boston is such a student- and university-driven city, there is a huge amount of free, untapped potential that the MBTA should look into.

On November 15, 2018, JP commented on Straight to the Source: Open Innovation at Buzzfeed:

This is a really interesting piece on Buzzfeed as a news source, and I learned many things about Buzzfeed’s news program (including that they have had Pulitzer finalists!). At the risk of straying into marketing territory, and as many have pointed out above, the Buzzfeed brand is associated not with credible news but with viral and pop culture content. I wonder if the very fact that they are so crowdsourced undermines their ability to ever become a credible news source in the eyes of the public. I don’t think that it does, as long as they institute a strong system of internal quality control, but that can be an arduous and labor-intensive process.

In addition, I’m wondering how Buzzfeed can leverage their users, or “crowd,” not just for story ideas and tips but for the actual investigative part of the journalism. For instance, would it be possible to have internet users not only report trends they find but also do the online digging to get to the bottom of the story? Outsourcing the research function would require a large amount of quality control as well, but it could be an interesting way both to source information and to identify budding journalistic talent on the internet.

This article is an interesting introduction to the world of soft robotics. On hearing about robots inspired by octopuses and more “gentle” or “soft” robots, I immediately thought about when soft robotics would be applied to humanoid or human-like robots. For example, is a human hand one of the most efficient tools for gently picking things up, and if so, would it be useful to start 3D printing “soft” robotic human hands? In addition, could this technology be combined with “hard” robots and applied to creating more realistic and responsive prosthetics for amputees?

In response to your question about whether to focus on a single application or develop widely, I think that it depends on the economies of scale for additive manufacturing. I would compare a 3D-printing production process to a job shop, where there is a fixed setup time and then a variable production time for each unit. Whether to focus or develop widely will depend on which of the two is the bottleneck: if setup time dominates, focusing on one application amortizes it over long runs, whereas if variable production time dominates, breadth costs relatively little.
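
To make that trade-off concrete, here is a back-of-the-envelope sketch in Python. All of the numbers (setup hours, hours per unit, total demand) are hypothetical, chosen only to show how the bottleneck flips the answer.

```python
# Back-of-the-envelope job-shop comparison for additive manufacturing.
# All numbers are hypothetical and purely illustrative.

def total_hours(products: int, units_per_product: int,
                setup_hours: float, hours_per_unit: float) -> float:
    """One fixed setup per product line, then variable time per unit."""
    return products * (setup_hours + units_per_product * hours_per_unit)

DEMAND = 1_000  # total units, split evenly across product lines

for setup, per_unit in [(40.0, 0.5), (2.0, 5.0)]:
    focused = total_hours(1, DEMAND, setup, per_unit)
    diverse = total_hours(10, DEMAND // 10, setup, per_unit)
    print(f"setup={setup}h, unit={per_unit}h -> "
          f"1 product: {focused:,.0f}h, 10 products: {diverse:,.0f}h")
```

With a 40-hour setup and half an hour per unit, ten product lines take roughly 900 hours versus 540 focused; with a 2-hour setup and 5 hours per unit, the gap nearly vanishes (5,020 versus 5,002 hours).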

On November 15, 2018, JP commented on Machine Learning in Radiology: Threat or Opportunity? :

This essay is an interesting take on the future of radiology in a world of Artificial Intelligence. I disagree and think that AI does have the potential to replace, or at least drastically reduce, the number of jobs available for radiologists in the future. Radiology is not a patient-facing specialty: radiologists work mostly in dark rooms on computers, and imaging results are delivered through physicians of other specialties. Therefore, the human aspect of radiology is not as relevant. Radiologists do take the clinical situation into account, but it is feasible to build an algorithm that also incorporates a patient’s clinical picture when reading an image. In fact, it might be better to have an “unbiased” read of the image, so that the clinical picture does not color how the image is interpreted.
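
Purely as an illustration of what “incorporating the clinical picture” could look like architecturally (not a description of any deployed system), a model can fuse an image embedding with structured clinical covariates. A minimal PyTorch sketch, with every layer size and feature count hypothetical:

```python
import torch
import torch.nn as nn

class ImagePlusClinical(nn.Module):
    """Toy classifier that reads an image alongside clinical covariates.

    Illustrative only: layer sizes, the 8 clinical features, and the
    binary "finding present" output are all hypothetical choices.
    """
    def __init__(self, n_clinical: int = 8):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),            # -> 16-dim image embedding
        )
        self.clinical_encoder = nn.Sequential(
            nn.Linear(n_clinical, 16), nn.ReLU(),
        )
        self.head = nn.Linear(16 + 16, 1)  # fuse both views

    def forward(self, image, clinical):
        z = torch.cat([self.image_encoder(image),
                       self.clinical_encoder(clinical)], dim=1)
        return torch.sigmoid(self.head(z))

model = ImagePlusClinical()
scan = torch.randn(4, 1, 64, 64)   # batch of 4 single-channel scans
labs = torch.randn(4, 8)           # e.g., age, vitals, lab values
print(model(scan, labs).shape)     # torch.Size([4, 1])
```

A side benefit of this design is that the “unbiased” read is easy to test: zero out the clinical branch and compare the two outputs.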

Your final question on whether or not patients will learn to trust algorithms is also interesting. I also wonder whether providers will learn to trust algorithms. From my experience in the hospital, almost all physicians look at their patients’ images with their own eyes before looking at the radiologist’s read of the image (at times completely disregarding what the radiologist says). I wonder if physicians will actually trust an algorithm’s read of an image more than they would a radiologist who they know has greater variance in accuracy. It is also interesting to think about whether patients will know, at the point of care, who or what read their image. Will regulation around AI in radiology require physicians to disclose that the results were generated by an algorithm, or will physicians more likely say that they themselves read the scans?

This is a fascinating look at a company that is applying a “hot” technology to a huge unmet need. The author poses interesting questions about the limitations of size, tissue innervation, and vascularization in making the technology clinically relevant. Another very important variable to consider is the possible immune reaction to 3D-printed organs. One of the huge advantages of 3D printing is that it can be customized for each patient, so the printed organ need not carry the major antigens that a patient’s immune system would reject. The cost-effectiveness of the technology will be hugely affected by how much Organovo’s products improve patient tolerance of implanted organs. The major costs of organ transplantation are not the procedure itself, but the lifelong immunosuppressive medications, treatment for complications of immunosuppression, and management of transplant rejection.

It is also interesting to consider how the development of NovoTissue will affect their ExVive business. There is important scientific knowledge to be gained from testing new drugs and therapeutics on models of whole organs rather than just cell cultures. New drugs may affect cell organization within an organ in ways that have a huge impact on the organ’s function.

On November 13, 2018, JP commented on Buoy Health’s mission to debunk Dr. Internet:

This is an excellent overview of Buoy’s technology. Having had personal experience with the company at its founding, I can say that its core technology is a static algorithm built by encoding the medical literature the way a provider would. The machine learning component comes in only as Buoy starts to get more unbiased data. Obtaining such data is difficult, as many users of Buoy are people on the internet who never report their final diagnosis or how “reasonable” their suggested diagnoses were. The data they do get come from people who go on to seek medical care, and are therefore likely biased toward more serious conditions.
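
To illustrate that selection effect with made-up numbers (not Buoy’s actual data), a quick simulation in Python shows how conditioning on care-seeking inflates the apparent rate of serious conditions:

```python
import random

random.seed(0)

# Hypothetical population: 10% of symptomatic users have a serious
# condition, but serious cases are far more likely to seek care and
# therefore to come back with a confirmed final diagnosis.
population = [random.random() < 0.10 for _ in range(100_000)]

def seeks_care(serious: bool) -> bool:
    # Assumed care-seeking rates: 80% if serious, 15% otherwise.
    return random.random() < (0.80 if serious else 0.15)

observed = [s for s in population if seeks_care(s)]

print(f"true serious rate:     {sum(population) / len(population):.1%}")
print(f"observed serious rate: {sum(observed) / len(observed):.1%}")
```

Under these assumed rates, roughly 37% of the cases that ever get a confirmed label would be serious, even though only 10% of users are; a model trained naively on that feedback would inherit the skew.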

Although the technology is excellent and becoming more “conversational,” it misses one crucial element of real-life interactions with a physician: the ability for a patient to “free-text” rather than answer multiple-choice questions. Valuable information often comes from the more qualitative character of a patient’s symptoms, as well as from the story and time course of those symptoms, which is more difficult to capture in nuanced detail over a chat.

Finally, Buoy was created to serve as a gateway to providers, filtering out those who don’t need medical assistance and encouraging those who are seriously ill to seek care immediately. For the former camp, Buoy’s algorithm, graphics, explanations, and recommendations are missing one thing: comfort. There is nothing quite like looking a parent in the eye and telling them that in a day or two their child will be back to normal. There are some patients who will always crave that sort of interaction, even if the ultimate recommendation is rest and hydration.

Despite the reservations I’ve expressed above, I think that the algorithm Buoy is currently refining has huge potential in the healthcare space. The problem to solve now is where in the health ecosystem it adds the most value.

This essay is an interesting overview of Cigna’s internal initiatives and acquisitions related to artificial intelligence. In response to the questions posed, the acquisitions are a good substitute for R&D investment because they bear directly on Cigna’s core business of predicting and, in this case, averting risk. Their strategy of trying to prevent bad health outcomes is a potent combination: it saves money for Cigna while gaining customers’ trust by showing that the company is trying to promote better health, not just raise premiums to make money.

I’m curious if or when artificial intelligence will completely take over risk prediction and premium determination, and what ramifications that will have in terms of discrimination based on intrinsic factors (race, sex, etc.) as well as behavioral factors. Most would agree that determining premiums based on intrinsic factors is unfair, given that many health inequities across race and sex can be traced back to structural inequality. However, is it ethical to raise premiums based on behavioral factors? Rising premiums would be a potent motivator to start exercising and eating healthily, but is the ability to do those things also distributed unequally throughout the population?