Satomi

  • Student

Activity Feed

On November 15, 2018, Satomi commented on Crowdsourcing as the Future of Secret Cinema:

Crowdsourcing for Secret Cinema is a great idea. These events are such a big undertaking to organize, and given that attendees don't know what the film is before they arrive, crowdsourcing gives them some assurance that the pick is a popular movie they will have seen, or at least heard of, or at the very least enjoy. This is where the law of large numbers (the statistical idea often invoked alongside the central limit theorem) likely holds: averaging a large number of independent judgments tends to converge on the "true" answer. In this case, crowdsourcing Secret Cinema's movie selection will likely lead to a pick that is good and enjoyable, which means high customer satisfaction and good business for Secret Cinema.
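To make that aggregation point concrete, here is a minimal, purely illustrative sketch (the film names, appeal scores, and noise level are all made up, not taken from Secret Cinema): if each voter's rating is an independent, noisy read on a film's true appeal, the crowd's average rating converges toward the true value as the crowd grows, so the crowd's top pick is increasingly likely to be the genuinely best-liked film.

```python
import random

# Hypothetical "true" appeal of three candidate films (higher = better liked).
TRUE_APPEAL = {"Film A": 6.0, "Film B": 7.5, "Film C": 8.5}

def crowd_pick(n_voters: int, noise: float = 2.0) -> str:
    """Average n_voters independent noisy ratings per film; return the top film."""
    avg_rating = {}
    for film, true_score in TRUE_APPEAL.items():
        ratings = [true_score + random.gauss(0, noise) for _ in range(n_voters)]
        avg_rating[film] = sum(ratings) / n_voters
    return max(avg_rating, key=avg_rating.get)

if __name__ == "__main__":
    random.seed(0)
    for n in (5, 50, 5000):
        picks = [crowd_pick(n) for _ in range(200)]
        share_correct = picks.count("Film C") / len(picks)
        print(f"{n:>5} voters: crowd picks the best film {share_correct:.0%} of the time")
```

With only a handful of voters the noise often swamps the signal, but as the crowd grows the best-liked film wins nearly every time, which is the assurance crowdsourcing gives Secret Cinema's attendees.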

This is an interesting potential use, and the environmental upside is appealing, but I'm still not entirely convinced that it makes sense to use 3D printing for large-scale projects. I see 3D printing as useful for processes that have long setup times and require a high level of customization. Practically speaking, cities should have a degree of homogeneity (e.g., roads should follow a common system, buildings shouldn't be too wacky), which means that traditional construction methods may be better suited for large-scale, homogeneous projects such as a city. I'm not yet entirely sold on the application of this innovation.

On November 15, 2018, Satomi commented on Challenge.Gov – A Model for Government Crowdsourcing:

I don't think governments lack innovative ideas – it's often that implementing innovative ideas is difficult. Your suggestion of mandating that governments include aspects of ideas from an open-source platform in their RFPs is an interesting one, but I worry that it just adds another layer of bureaucracy to an already bureaucratic process.

On November 14, 2018, Satomi commented on Disney – “A Whole New World” of Machine Learning:

Extremely interesting, especially the other potential applications of FVAE. I like the idea of using FVAE for consumer research, but I am less supportive of the idea of using machine learning to predict what kind of content should be produced. Call me a romantic, but I think art should be a creative process rather than an optimized, data-driven process to maximize sales. It takes the joy and wonder out of creative content if I know it was simply generated by algorithms. Also, the future can only be predicted if it looks a lot like the past, so using machine learning to help generate creative content means we'll probably end up with things that look a lot like what is already out there – which could stall creativity.

On November 14, 2018, Satomi commented on Data – The Farmer’s New Edge:

Good questions re: how pesticide manufacturers will react to this innovation. I actually see them as potential acquirers of this software, which they could pitch as a bundled sale with pesticide so that farmers spray just the right amount. Re: farms struggling financially, one could argue that if this product helps them cut back on excessive and wasteful pesticide use, it may be worth the upfront cost. It likely all depends on the product's price point and what the marginal benefit is actually worth.

I think I used Colour IQ the last time I went to a Sephora store! I really liked that the machine told me what to buy based on my skin tone, skin type, and what I was looking for – it felt a lot more objective than a salesperson's recommendation, because I always wonder whether they're recommending a product because it earns them a higher commission than another one. That said, I fully recognize that a higher commission or special promotion could be baked into the Colour IQ code; it just feels more objective because a machine is making the recommendation rather than a human!

Absolutely fascinating. This would be a use case for 3D printing that could save lives if it can be implemented safely. I think there are a few questions AZ should think about:
1) How does patent protection come into play in such a setting? I don't see this as too different from generics companies producing drugs before the patent expires, so there may be a question of whether the scope of protection, or the legal action that AZ can take, differs depending on whether the generics are made for sale (generics companies mass-producing) or for individual consumption (presumably, 3D printing).
2) How can it differentiate itself from a quality standpoint? Again, I see 3D printing as very similar to generics companies (or other non-reputable companies) making copies of AZ's drugs and selling them. AZ has an advantage over DIY 3D-printed drugs in that its products are quality-assured.

Additionally, 3D printing may be more applicable to chemical compounds. However, there is a trend toward biologics and personalized medicine for new treatments, which, from my limited understanding, seem like a poorer fit for 3D printing since their production is more complicated than current 3D printing technologies can handle. The question thus becomes: does it make sense for AZ to try something super high-tech (i.e., printing medicines) to solve a rather low-tech distribution problem?

On November 14, 2018, Satomi commented on Love in a Hopeless Place: Machine Learning at OkCupid:

Interesting read. One catch with the algorithm screening for people's revealed preferences (i.e., users messaging people who self-report being shorter than the users' stated preference) is that we also don't know whether the other person is lying about their height. This calls into question the whole notion of whether self-reported data is useful, or whether people should just post photos of themselves instead (which is what Tinder and the like tend to rely on).

The notion that our data footprint can tell us more about who we are than we know about ourselves is extremely interesting. Leveraging that footprint online puts us in Black Mirror territory, which is a little scary but definitely not too far off in the future.

I also struggle with the issue of companies trying to leapfrog solutions in the developing-country context. My view is that a Silicon Valley mindset is being applied to international development, where engineers focus on a sexy solution (e.g., a machine-learning-enabled chatbot) and try to fit it into a problem (e.g., lack of access to quality care). Machine learning is a tool that is helpful when there is scale, but from your essay it sounds like they have paused scaling in order to first perfect the tool. That is a very interesting management choice, and I would be interested to hear more about why management decided to do this – in a setting where inputs are needed to make accurate predictions, I would have thought that scale would help make the chatbot more accurate, since more data points would help the company understand 1) the types of questions users ask, and 2) how they ask them. Applying machine learning to language seems to be a very complicated undertaking (see: IBM Watson), and I question whether a Kenyan healthcare start-up has a comparative advantage in pushing for this innovation.