iaw507

  • Student

Activity Feed

On November 15, 2018, iaw507 commented on How Stryker Hopes to Win with Additive Manufacturing:

It’s interesting to think about how a company brands novel technologies like this given the history of some of its other products. Stryker had a slate of recalls in the early 2010s over some of its knee replacement guides, which malfunctioned and triggered an FDA Class I recall, the agency’s most serious recall classification (see https://www.fiercebiotech.com/medical-devices/fda-slaps-most-serious-label-stryker-s-surgical-recall). Much of the discussion around surgeon buy-in makes me wonder what kinds of exposure to the product would be necessary for key opinion leaders to become advocates for it.

While I’m not well versed in the technical details of these machines, I am curious whether these printers can produce all the types of materials needed (imitating bone, soft tissue, biological elements, etc.) cost-effectively enough to be housed on-site in places like hospitals for versatile types of models. To the point above, it is definitely intriguing to think about the possible advantages of offering this kind of customization closer to the user.

On November 15, 2018, iaw507 commented on Ford Races Ahead in Additive Manufacturing:

This was fascinating, thanks for sharing! One thing I’ve been curious about is how 3D printing of personalized car parts will affect the repair process and the extent to which consumers start tweaking vehicle electronics or other components at home. Beyond the point about opportunity costs for investment dollars, I could also imagine scenarios where there are technological conflicts between the requirements of autonomous vehicles and 3D-printed, consumer-led designs. As vehicles are engineered for autonomous capabilities, I wonder whether there will be trade-offs between ensuring the safety and security of the systems required for AV operation and providing enough flexibility for 3D customization and consumer-led design.

On November 15, 2018, iaw507 commented on Airbnb: Utilizing Machine Learning to Optimize Travel:

Thank you for sharing this! The point about technology risks in ranking users and hosts, or otherwise encoding existing human bias into these algorithms, seems pertinent given Airbnb’s particular history with this issue. In 2015, a widely circulated study from HBS found that guests with distinctively African-American names were less likely to be accepted for reservations than white guests, controlling for other factors (see https://www.fastcompany.com/3054520/airbnb-hosts-discriminate-against-black-renters-study-finds). Unfortunately, that discrimination can feed directly into the streams of data that inform pricing models and personalized search rankings (how many people have booked with you, the kinds of guests you’ve accepted in the past, etc.). In 2016, Airbnb retained former U.S. Attorney General Eric Holder to create an anti-discrimination policy in response to these concerns, and the company is still working to implement some of those recommendations. I would like to see advances in these algorithms complemented by a robust and ongoing assessment of indicator statistics that signal how these online marketplaces are addressing discrimination risks (examples could include acceptance rates for guests by race, gender, or other factors; effects on submission rates when host photos are removed from search results; uptake of instant booking; etc.).
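
To illustrate the kind of indicator tracking I have in mind, here is a minimal sketch (my own illustration with made-up column names and data, not Airbnb’s actual schema), computing acceptance rates by guest cohort and each cohort’s gap against the overall rate:

```python
# Minimal sketch of an acceptance-rate indicator, assuming hypothetical
# booking-request data with a demographic cohort label per guest.
import pandas as pd

requests = pd.DataFrame({
    "guest_group": ["A", "A", "B", "B", "B", "A"],  # hypothetical cohorts
    "accepted":    [1,   0,   1,   1,   0,   1],    # 1 = host accepted
})

# Acceptance rate per cohort, plus each cohort's gap vs. the overall rate,
# a simple signal that could be tracked on an ongoing basis.
rates = requests.groupby("guest_group")["accepted"].mean()
overall = requests["accepted"].mean()
print(rates)
print("gaps vs. overall:", (rates - overall).round(3).to_dict())
```

A persistent negative gap for one cohort would not prove discrimination on its own, but it would flag where a deeper audit (such as the photo-removal experiments mentioned above) is warranted.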

On November 15, 2018, iaw507 commented on Leveraging Machine Learning to Reduce Spam on Twitter:

Thank you for sharing! I continue to be fascinated by the uses of machine learning on these social media platforms in light of recent reporting on fake news. This made me think of research from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) on using AI to determine the accuracy of sources and detect political bias (see https://venturebeat.com/2018/10/03/mit-csails-ai-can-detect-fake-news-and-political-bias/). The researchers are trying to create an open-source dataset with bias scores that could eventually be used on platforms like Facebook and Twitter to detect questionable content. To your question about how to deal with the accuracy rate, I wonder if the process could be tiered into probabilistic tranches, combining human intervention and machine learning so that the machine handles near-certain cases and reduces the burden on staff time. For instance, the algorithms could rank accounts or material by level of uncertainty, so that human staffers provide a second check on a smaller pool of accounts, reducing the limitations of the AI solution in isolation.
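
To make the tranche idea concrete, here is a rough Python sketch (my own illustration with a hypothetical confidence threshold, not Twitter’s or CSAIL’s actual pipeline):

```python
# Uncertainty-based triage: auto-resolve near-certain cases, queue the
# rest for human review, most-uncertain first.
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    p_spam: float  # model-estimated probability the account is spam

AUTO_THRESHOLD = 0.05  # hypothetical: within 5% of certain is "near-certain"

def uncertainty(acct: Account) -> float:
    # Distance from a certain decision (0 = fully certain, 0.5 = coin flip)
    return min(acct.p_spam, 1 - acct.p_spam)

def triage(accounts):
    auto = [a for a in accounts if uncertainty(a) <= AUTO_THRESHOLD]
    review = [a for a in accounts if uncertainty(a) > AUTO_THRESHOLD]
    review.sort(key=uncertainty, reverse=True)  # hardest cases first
    return auto, review

accounts = [Account("a1", 0.99), Account("a2", 0.55), Account("a3", 0.02)]
auto, review = triage(accounts)
print("machine-handled:", [a.account_id for a in auto])    # ['a1', 'a3']
print("human review:",    [a.account_id for a in review])  # ['a2']
```

The threshold could then be tuned against reviewer capacity: raise it and the machine handles more cases, lower it and more cases get a human check.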

Thanks for sharing this! It was interesting to hear more about how Amazon is trying to keep Alexa competitive in this market through open innovation and skills development. I can definitely see the benefits of outsourcing R&D to a wider community that can uncover additional offerings for a diverse user base. I am curious to see how much Alexa improves from Cleo, the gamified skill within the Alexa app that Amazon launched in March 2018 to encourage non-native English speakers to provide feedback on local dialects (https://www.scmp.com/tech/social-gadgets/article/2138204/amazon-wants-your-help-teaching-alexa-new-languages-and-it-could).

On a different front, the open community approach to developing skills has led to some fruitful efforts to provide sign language reading capabilities. Recognizing that the focus on voice assistance has excluded the deaf community, a developer used the machine learning platform TensorFlow to create a web app that reads sign language through a laptop camera and “speaks” the message to Amazon Echo, then encodes Echo’s response and types it out (see https://medium.com/syncedreview/signing-with-alexa-a-diy-experiment-in-ai-accessibility-57e4407af539). The app seems more like a proof of concept than something ready for regular use, but it’s an interesting take on how Echo could differentiate itself and serve an entirely different community of users.
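
For intuition, the first half of that pipeline (camera to classifier to spoken word) might look something like this rough sketch (my own simplification, not the developer’s actual code; the model file and vocabulary below are hypothetical placeholders):

```python
# Sketch: classify a webcam frame as a sign, then "speak" it aloud so a
# nearby Echo can hear it. Assumes a pre-trained image classifier.
import cv2                      # webcam capture
import pyttsx3                  # offline text-to-speech
from tensorflow import keras

model = keras.models.load_model("sign_classifier.h5")  # hypothetical model
LABELS = ["hello", "weather", "stop"]                  # hypothetical vocabulary

engine = pyttsx3.init()
cap = cv2.VideoCapture(0)

ret, frame = cap.read()
if ret:
    img = cv2.resize(frame, (224, 224)) / 255.0   # match model input size
    probs = model.predict(img[None, ...])[0]      # batch of one frame
    word = LABELS[probs.argmax()]
    engine.say(word)                              # Echo hears the laptop speak
    engine.runAndWait()
cap.release()
```

The return leg, transcribing Echo’s spoken reply back into on-screen text, would need a speech-to-text step on top of this.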

Thanks for sharing! The role of the private and public sectors in deciding which innovations to fund and how to support scaling is such an important discussion in global health. The issue I have with open-sourcing product development through some of these philanthropic competitions is that they often presuppose the parameters of “success” without incorporating local contexts or user needs.

To Kai’s point, the competitions can overindex on technical or financial barriers (e.g., the toilet must be produced at XX cost, or be able to operate without connection to a sanitary system) at the expense of desirability or critical social, cultural, and behavioral factors. An example that comes to mind is PlayPumps International, a company that created water pumps designed as merry-go-rounds for children to play on (see https://www.theguardian.com/commentisfree/2009/nov/24/africa-charity-water-pumps-roundabouts for more information).

The concept was for children to play on a merry-go-round that would simultaneously pump water into an elevated storage tank. The company was the beneficiary of a $60M public-private partnership with the US President’s Emergency Plan for AIDS Relief (PEPFAR), and in addition to direct U.S. government aid, it garnered endorsements from George and Laura Bush and Jay-Z. Unfortunately, the well-intentioned project went significantly awry, for reasons that included reliance on child labor, injuries, high operations and maintenance costs, and the fact that children are not always playing when demand for water is highest (early morning, evening, etc.). While I see a role for philanthropic institutions in helping seed early-stage ideas and address technological barriers, I think we frequently discount the risk that they undervalue the kinds of user experience criteria that lead to commercial adoption at scale.