About This Episode:
Adam Molnar is the Co-Founder and Head of Partnerships at Neurable, whose beautiful office in historic downtown Boston is where we filmed this episode live.
Neurable was gracious enough to lend me one of their conference rooms to meet inspiring local founders & entrepreneurs, and I got a first-hand look at their upcoming tech.
Neurable makes brain-computer interfaces, or BCIs, and their tech is so discreet and non-invasive that it can literally fit into a pair of normal headphones.
That’s right: just by wearing a normal pair of headphones, information from your brain can be captured, allowing you to do truly futuristic things like control Spotify using only your mind, or better understand your focus and natural cycles to optimize your productivity in ways we never dreamed of.
With their ground-breaking tech, it’s possible that in a few years every pair of headphones you buy will have Neurable baked in, ushering in a new era of both computing and self-awareness.
Links:
If you enjoy the show, please rate it 5 stars on Apple Podcasts, subscribe, and leave a nice review!
EPISODE HIGHLIGHTS:
3:51 – “The standards that we use for EEG have existed for decades, and we still use them, like the 10-20 standard array. This marks a somewhat substantial shift away from what has been known in laboratories, to bring that out into the wild, and there’s a lot of inspiration from wearable devices: heart rate monitoring has existed in hospitals for many, many decades, and it’s just now that it’s commonplace to wear an Apple Watch or a Fitbit, devices that also track your heart rate. So to the same extent that we’re putting technology in an invisible way into existing form factors, that’s what we’re doing with our next product and future products.”
5:00 – “What we’re providing people, really for the first time, is unfettered access to yourself, from an organ that has historically had no pain receptors. It’s not like a heart that you can feel when you get anxious, or a muscle that gets sore after a workout; there isn’t an equivalent of that for the mind. So Brain OS, which is our platform, that first version is going to give a level of insight on a consistent basis that’s validated and just works, in a form factor where you don’t feel like you’re part of a science experiment, really for the first time. And that’s really exciting.”
5:38 – (Ross) “And earlier this morning we had a call…and they were sort of showing some of the potential applications, again, for people who have had difficulties, people with spinal cord injury, people who have cerebral palsy or various things that prevent them from being able to do a lot of the things that many other people take for granted. So creating an interface with the brain directly has a number of incredible advantages across a wide number of fields, in addition to just the sheer convenience of being able to control something with your mind and say, ‘Spotify, play the next song,’ or something silly, right?”
(Adam) “Yeah. I mean, what you’re touching on is the root of my co-founder’s starting thesis, the starting vision that really got all of this into motion, which is that, for better or worse, there are people who just get limited by life, whether that’s a physical limitation or a cognitive limitation. We don’t like to think of them as limitations; these are just things that we can build around, that we can solve, that we can equalize.”
10:57 – (Ross) “The part that always confused me, and I’m sure it’s quite straightforward, is just: how can I send the signal for doing something completely without actually doing it?”
(Adam) “Yeah, well, see, your body is funny in the sense that it somewhat is a filter. For example, people will refer to your finger movements, or the electrical activity in the muscles in your hand, as an extension of your brain, because ultimately there is an electrical signal that goes from your brain into the muscles, into your hand, to then manipulate a keyboard click or whatever. So how we think about what neural signals are is subject to interpretation. We do these things biologically that filter that signal out for a more specific intent. Like if I’m choosing between yes or no to answer a question and I type yes, that’s not you necessarily reading my mind; that’s my neural signal translating into an input that answers your question. And then on the algorithm side, we do something similar: we have different labels, and we filter in a specific way to look for insights or data that is inferential of a target answer. And that’s how we build and train these things, to then build up capabilities to power features that then give an end-person benefits. It’s a series of translations.”
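To make that chain of translations concrete, here is a minimal sketch of such a pipeline in Python: band-power features are extracted from EEG epochs (the “filtering” step), paired with labels, and fed to a classifier that infers a yes/no intent. Everything here (the sample rate, channel count, frequency bands, and choice of classifier) is an illustrative assumption; Neurable’s actual pipeline is proprietary and certainly more sophisticated.

```python
# Hypothetical sketch of the "series of translations": raw signal ->
# filtered features -> labeled training data -> classifier -> inferred intent.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 256         # sample rate in Hz (assumed)
N_CHANNELS = 8   # headphone-style electrode count (assumed)

def band_power(epoch, lo, hi):
    """Mean power per channel in a frequency band: the 'filtering' step."""
    freqs, psd = welch(epoch, fs=FS, axis=-1)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[:, mask].mean(axis=-1)

def featurize(epoch):
    # Classic theta/alpha/beta bands; which bands carry intent is an assumption.
    return np.concatenate([band_power(epoch, lo, hi)
                           for lo, hi in [(4, 8), (8, 13), (13, 30)]])

# Stand-in data: 200 random 2-second "epochs" labeled 0 (no) or 1 (yes).
epochs = rng.normal(size=(200, N_CHANNELS, FS * 2))
labels = rng.integers(0, 2, size=200)

X = np.array([featurize(e) for e in epochs])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~chance on noise
```

On real recordings, the labels would come from cued trials, and both the features and the model would be validated per user before powering any end feature.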
13:08 – “I was listening to a podcast today on the train from New York to Boston and, as one does while listening to podcasts, I started mind-wandering. I started thinking about a new solution to a problem, but then, ‘Shoot. I missed where I am.’ One of the things that we’re prototyping in the office very soon is a feature to essentially bookmark the parts of a podcast where you were really dialed in versus the parts where you might have drifted off, so that either it will auto-flip back for you, or it will slow down the speed of the voice so that you have an audio cue to get back to it, or give you a report of which parts were the most tasty for you and which parts you were maybe a little bit distracted for. These are practical applications of what’s now possible because we have this fairly reliable estimation of attention. Yeah, and that’s just step one. We also have things around fatigue. We have early primers for stress and biomarkers of neurocognitive diseases. So yeah, so that’s the bottom limit.”
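Once you have a per-second attention estimate, the bookmarking idea reduces to a simple segmentation problem. The sketch below is purely illustrative; the threshold, the one-reading-per-second cadence, and the drift_spans helper are assumptions of mine, not Neurable’s implementation.

```python
# Flag spans where a per-second attention estimate dips below a threshold,
# so a player could rewind to the start of the first "drift".
from itertools import groupby

def drift_spans(attention, threshold=0.4):
    """Return (start_sec, end_sec) spans where attention fell below threshold."""
    spans, t = [], 0
    for low, group in groupby(attention, key=lambda a: a < threshold):
        length = sum(1 for _ in group)
        if low:
            spans.append((t, t + length))
        t += length
    return spans

attention = [0.9, 0.8, 0.7, 0.3, 0.2, 0.3, 0.8, 0.9]  # one reading per second
print(drift_spans(attention))  # -> [(3, 6)]: rewind playback to second 3
```

A real player would smooth the raw estimate before thresholding, and could then rewind to the start of a span, slow playback inside it, or compile the spans into the end-of-episode report Adam describes.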
15:03 – “We have an algorithm right now that I can speak publicly about called Take a Break, where it identifies cognitive decline and then just gives you a cue that says you would benefit more from a break now than if you were to wait however-many minutes. And this is a pilot that we ran with the Mayo Clinic and something that we’re continuously proving out, and it’s very exciting. But that’s just level one: when you should take a break. The next is, when you take the break, how recuperative was it for you? How much did it restore? What was the equivalent of battery charge that you got from it? That’s level two. Level three is then starting to curate tailored interventions, specific to you: you, at your current disposition, would most benefit from a five-minute walk, a seven-minute meditation, or maybe half a cup of coffee. And these are real things that you can start to get to when you start measuring things and getting these data points at scale.”
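Those three levels map naturally onto three small functions. The sketch below is a hedged illustration only: the trend test for level one, the recovery delta for level two, and the per-user intervention table for level three are placeholders I have invented, not the Take a Break algorithm Neurable piloted with the Mayo Clinic.

```python
import numpy as np

def should_take_break(focus_history, window=10, slope_cutoff=-0.02):
    """Level 1: cue a break if focus shows a sustained downward trend."""
    recent = np.asarray(focus_history[-window:])
    slope = np.polyfit(np.arange(len(recent)), recent, 1)[0]
    return slope < slope_cutoff

def recovery_score(focus_before, focus_after):
    """Level 2: how much 'battery charge' did the break restore?"""
    return max(0.0, focus_after - focus_before)

def suggest_intervention(score_by_intervention):
    """Level 3: pick the intervention with the best measured payoff for *you*."""
    return max(score_by_intervention, key=score_by_intervention.get)

history = [0.8, 0.78, 0.75, 0.7, 0.66, 0.6, 0.55, 0.5, 0.45, 0.4]
if should_take_break(history):
    print(suggest_intervention({"5-min walk": 0.20,
                                "7-min meditation": 0.15,
                                "half cup of coffee": 0.10}))
```

In this framing, level three’s table would be learned over time by feeding level two’s recovery scores back into it, intervention by intervention, which is what “measuring things at scale” buys you.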
17:23 – (Ross) “But on the upper limit, if you are the type, you can really optimize your life, you can optimize your productivity perhaps, and you can start figuring out how you work and start making better, more informed decisions about structuring your day, which could have pretty far-reaching implications for the way that all of our days are structured, I imagine, if that data becomes more readily known.”
(Adam) “Absolutely. And there is a risk associated with that: we don’t want people to get obsessed with the optimization. What we’re trying to communicate, and have people benefit from, is access to a new data source where you are able to have a better understanding. You can color in this second guess that you might have had. You just have another reference point to better understand yourself. And that’s what we’re trying to accomplish. But yeah, totally. It could be used for optimization, it could be used for productivity improvement, it could be used for reducing strain on your cognitive profile.”
20:29 – “Several years ago, this reporter from Reuters got in touch with me and was just trying to get a sense of where neurotechnology stands in the field of ethics. And I told him, our company won’t sell your data. That’s not our strategy. We may use your data, from a de-identified source, to improve algorithms, create new capabilities, and return new value to you. But we’re not going to sell who you are to someone so that they can engineer a better advertisement, or to an insurance company so that they can jack up your rate, which is what is happening…you are the owner of your data. And when we launch our product, you’ll have the ability to delete your data, to delete your account. We are transparent with what we do with it. We ask for you to opt in. We explain what we’re going to do with it.”
23:55 – (Ross) “So from a marketing or advertising standpoint, obviously Coca-Cola would like to know that for the 30 seconds of their latest commercial, you were fully engaged and the dopamine was jacked up to the max, so you have nothing but positive associations. So it’s easy to understand how a marketer or an advertiser would want to make use of that data, because it’s like the holy grail of understanding what somebody feels. And then you can A/B test a thousand different versions and see which one had the most engagement. But of course, extrapolating that a bit further, when you’ve got algorithms controlling our lives, like TikTok’s, the ability to really, really make somebody addicted to a platform and say, ‘Okay, we’re not going to let you off the hook here. We’re going to keep that dopamine perfectly dialed,’ has pretty far-reaching implications as well, especially when you consider that a lot of the people who are most active on these types of platforms have no idea that these things are learning from them, or are feeding them stuff that is designed to hook them to the maximum possible degree.”
(Adam) “I was speaking at MIT with fellow esteemed individuals who talk about neuroethics, and one person was trying to make the case that neuroethics is unique because it has long-standing implications that could manipulate our behavior. And I said, respectfully, I disagree: we are currently engaged in many activities where we don’t fully understand how they manipulate our behavior. Social media is a great example, a shitty example, but a great example. And so I see this tool as potentially even being a boon here, or like a remedy, where you could at least monitor it and then be able to build out interventions for it. And that’s why my platform, when you give me a soapbox and you let me go, is not about technology. It’s about protections. It’s about transparency. It’s about…consent, and more than consent, what is informed consent? As opposed to forced consent, which is also–”
(Ross) “Click okay, or you can’t use this product.”
(Adam) “Exactly. How do we build in the option to be forgotten? How do we prevent surveillance? People don’t necessarily mind insights being revealed. At that same talk, this topic came up about the ability to identify the onset of some kind of cognitive disease early. So the moderator asked the audience, ‘How many of you would want to know if you’re predisposed to Alzheimer’s?’ And some hands went up, and she asked the opposite, ‘How many of you would be opposed?’ Some hands went up, and I made the point that what just happened was supreme: you gave them the option. They understood what they were receiving, and that is how we should build these systems. We should build them with consent from the people they involve. And then when it starts to go beyond the user, when it starts to become third-party sales or commoditization of data, that’s when policy needs to come in. Well, I mean, policy needs to come into play at all of the levels, but that’s when I think we really do have an obligation to ensure that these systems are being used in an above-board manner.”
29:58 – “For our part, whether or not – and I do anticipate that we will be, but for the sake of argument – whether or not Neurable will be here in ten years, the decisions that we make, the culture that we set, and the precedent that we establish with our products essentially facilitate other companies to follow that, or force other companies to follow that, because if they then do anything less, that’s no longer the norm. So I think we have this little obligation, which I think is a fairly major obligation, to do what’s right, and that’s how we carry ourselves.”
32:40 – (Ross) “I don’t believe that when Mark Zuckerberg created Facebook he was seeing 400 steps in advance: ‘Oh, this could be used as a political tool.’ ‘Oh, this could manipulate an entire election.’ When he thought, ‘let’s connect a bunch of people’s friends together on a platform,’ I don’t think he realized that memes would be so powerful as a means for manipulating people’s thoughts, for example. And these algorithms that choose to prioritize one thing or another involve relatively small decisions. You say, ‘I’m just going to show you things that you like versus things that you don’t like.’ Seems harmless enough, right? But then you get echo chambers. Oh, it turns out there are two different communities that never hear from each other, and they believe opposite things because each only hears what it likes, whereas the other point of view would be something they don’t like, which they’re less likely to engage with. So we live in an era of unintended consequences, where it’s very hard to predict what’s going to happen, or what the ten-year-on, 20-year-on, 30-year-on consequence of a decision we make today will be.”
39:50 – “What I do have is this recognition that this is important, and knowing that if I don’t try to make this a better place for the future, who will? And if I’m not spending my time doing that, then why would someone else? I was meeting with some founders the other day, and I was speaking with a guy who works in finance…I was telling him that we’d been doing this for nearly ten years, and he goes, ‘Is it easy?’ ‘No, it’s really fucking hard.’…And he’s like, ‘Do you ever think about quitting?’ I’m like, ‘Yeah, honestly.’ What keeps it going? If it weren’t hard, it wouldn’t be worth doing. If I weren’t doing this, I’d be doing something else that was hard, I hope. The hope that the work I’m doing today benefits someone tomorrow just helps get you up in the morning.”
42:48 – “I was homeless for a short period of time when I was launching my first company, working three jobs, living, eventually, in an apartment that was under construction and didn’t have heat or hot water, which in a Michigan winter is not necessarily safe. I remember, having already graduated, not having access to a library after hours and waiting for a student to walk in so that I could sneak into the library too and have a place to work out of; sitting against the one wall of my apartment closest to the cafe behind it so that I could use their Wi-Fi. And these are all first-world problems. But these were hard times that made me recognize that it could be so much worse. So, when I have a shitty day, when something doesn’t go right, I’ve had it worse. At the end of the day, it’s really not that serious. I’m still alive, I still have my family, I have friends. I know that I’m loved. I know that I love. And those are all, I think, really critical things.”
56:01 – “Your brain likes to be right, and it is more difficult for your brain to process ways to anticipate how you might be wrong than to reinforce worldviews that dictate how you are right, which directly affects this. It’s more difficult for me to think, ‘Oh, what part of what you’re saying might be true? Why are you thinking this way? Where are you coming from?’ Those questions require more processing power.”