Peer Effect

Mastering AI Integration to Future-Proof Your Business, with Eric Daimler

James Johnson Season 3 Episode 25

The future of AI in business? It’s already arrived.

Eric Daimler, a titan in the AI space with over 20 years of experience, joins Peer Effect to show you why. With a background that spans advising the White House, co-founding six tech startups, and leading Conexus AI, Eric brings some incredible insights into the transformative power of AI.

Together, we dive into:

• The importance of timing and adaptation, and why philosophies like "blitzscaling" and "failing fast" may not always apply to your business.
• How getting specific will help your business avoid inefficiencies, enhance collaboration, and leverage AI technologies more effectively.
• The value of learning from a variety of real-life experiences and being selective about the advice that founders should incorporate into their business.

Discover more about Eric Daimler's work at Conexus AI or follow him on LinkedIn for more exciting insights.

More from James:

Connect with James on LinkedIn or at peer-effect.com


Speaker 1:

I've been doing AI for 20-plus years, way longer than I care to think about: as a researcher at Stanford and Carnegie Mellon, as an entrepreneur on my sixth startup, as a venture capitalist on Sand Hill Road. But often how people know me is the time that I spent in Washington DC as an AI advisor in the White House during the Obama administration. So I work to commercialize these often sophisticated technologies that we develop in academia into successful businesses. That's what I've been doing my whole life.

Speaker 2:

So I think what's interesting for this podcast is that you've come at AI from, let's say, a technological perspective and a VC perspective, and you've done six startups in this space. I imagine you've got some interesting things to share. What is maybe the one thing that you would share with other later-stage founders?

Speaker 1:

I've talked to other, we'll say, very successful entrepreneurs, ones that we read about in the press. Their advice to me, which I've really taken to heart, is that timing plays a big role in the degree to which they're successful or not, but certainly in the degree to which they are big or not. It's our responsibility to just grow into those roles as the companies develop. What I've also noticed, I sit on the entrepreneur advisory board for one of the schools I went to, and they have these people with accomplishments in their careers coaching the young entrepreneurs, in a way that you might think is just a fantastic cycle of life, I guess you might say: people that had spent their careers in one area giving advice to the next generation. What I found from that is that the world actually does change. There are some constants, and there are some things we as people, as a society, as a species, relearn, but in this domain, in growing technology companies, I have noticed that the world does actually change. It's really difficult for new entrepreneurs to get good advice from people that may have been operating in just a different time, let alone a different industry. It doesn't translate between social media and semiconductors, for example. There are some universal business rules, to be sure, but that advice translation is tough.

Speaker 1:

The result is that we are learning on the job. We are learning, to the point of this podcast, from each other. We are developing ourselves as the world develops. I tend not to read many business books, because I think they're mostly awful, but I do read a lot and I talk to a lot of my peers. So, for example, in our business, we are currently developing and commercializing these discoveries around artificial intelligence. We go where the love is, so that often finds us outside the US. We are a global company much, much earlier than conventional wisdom would suggest, for all those reasons, and we have learned and modified our approach in light of the responses we get, which is generically what people do. But we're coming out at this particular time with this particular technology, and it's forced us to learn and develop like everyone does. Our jobs are changing. The people we hire have different profiles, and that's a dynamic I wouldn't even have been familiar with 10 years ago.

Speaker 2:

So the game's changing quite rapidly. Who do you go to for advice if the game is changing rapidly?

Speaker 1:

I go to my friends, I go to other founders. I tend not to go to events where you're just generically meeting people. I listen very carefully for how I and my team need to grow and change. It's our job to recognize what fits and what doesn't.

Speaker 2:

What is the main lesson from this type of business that you think could be applicable more widely? Because a lot of the lessons that get shared are around blitzscaling, or this kind of fail-fast mentality.

Speaker 1:

Oh, I hate that term, it's just a terrible term. I understand the concept of fail fast, but both blitzscaling and fail fast are so much about businesses that are not mine. They might have been fine in that other context, but they're not mine. In my business, lives are at stake, or, if not lives, the enterprise's success is at stake. In computer science lingo, we'd say we're at the kernel level. We are dealing with a level of infrastructure that has consequences. You can play around at other levels, not to diminish the value of customer service or the value that somebody might get from using chatbots, which are today's expression of large language models. But that's not where we play, and so it's just not the same. Having said that, we're not the ones actually building rockets, but we are supplying the software to the people that are building rockets. So we aren't afforded the luxury of an early-2000s-era, NASA-style software engineering sequence where I take a decade to build a rocket, but I also can't afford to just fail and iterate with too much rapidity.

Speaker 1:

The analogy is to a book that I do think I can recommend with some degree of comfort, which is Daniel Kahneman's Thinking, Fast and Slow. It's not the traditional book one might recommend, but I'd say it has had a profound impact on my understanding of decision-making and how it applies to building a successful company, because the distinction between fast, intuitive thinking and slow, deliberate thinking helps me make better decisions. I recognize the biases and heuristics that can cloud my judgment, and then rely on that slow, contemplative thinking when making important decisions. I've also learned the importance of creating an environment where my team feels comfortable challenging assumptions and where I hear diverse perspectives on some of these decisions. So I can recommend that book to any founder who wants to improve their decision-making skills to build a more successful company.

Speaker 2:

I was going to ask: presumably you still need to experiment, and there's still the risk of failure. How do you get that balance right when the consequences of getting it wrong are more severe?

Speaker 1:

That's a great question, and it speaks to the future that we envision, which is that the future is formal. That's the technical way of saying it, formality in a computer science sense, but it's really an increased skill around being more specific. So I'd say individuals need to be really careful about clarifying what they mean. The future of scaling complex systems requires people to increasingly specify their preferences for a whole range of circumstances, whether it's the maximum allowable surface pressure for an oil well or certain budgetary constraints for a planning cycle inside a growing company.

Speaker 1:

Planning in increasingly large organizations can itself take the whole quarter that the quarterly plan was meant to cover, and it's impaired by the difficulty of exploring the what-if lattice. I want to bring AI to that, effectively doing the exploration with specific preferences interacting. So the skill set is people talking a little bit more like lawyers, lawyers talking a little bit more like engineers, and engineers talking a little bit more like machines. Once we discipline ourselves to specify our preferences with that extra degree of rigor, we materially increase the velocity with which we collaborate, which I personally think is the killer app for AI: collaboration.
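To make that concrete, here is a minimal sketch of what formally specified preferences and what-if exploration can look like, using the open-source Z3 solver as a stand-in for the kind of AI Eric describes. The variables, budget cap, and ratios are invented purely for illustration; none of this is Conexus's actual technology.

    # A minimal sketch of formally specified preferences, using the
    # open-source Z3 solver (pip install z3-solver). All variables,
    # limits, and ratios here are hypothetical.
    from z3 import Solver, Real, sat

    eng_budget = Real('eng_budget')   # engineering spend, $M (assumed)
    mkt_budget = Real('mkt_budget')   # marketing spend, $M (assumed)
    headcount = Real('headcount')

    s = Solver()
    s.add(eng_budget >= 0, mkt_budget >= 0)
    s.add(eng_budget + mkt_budget <= 10)  # total budget cap (assumed)
    s.add(headcount <= eng_budget * 6)    # hiring tied to eng spend (assumed)
    s.add(headcount >= 20)                # minimum viable team (assumed)

    # Explore one branch of the what-if lattice without losing the baseline.
    s.push()
    s.add(mkt_budget >= 6)                # what if marketing takes $6M?
    if s.check() == sat:
        print(s.model())                  # one concrete plan meeting every preference
    s.pop()                               # back to the baseline constraints

Each preference becomes a constraint, and each branch of the what-if lattice is just a push/pop around an extra constraint, reconciled in milliseconds rather than in meetings.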

Speaker 2:

It's really interesting, because one of the things I talk about with founders and coaches is the idea that if you want to be a no-excuses leader, part of that is communicating really clearly. It sounds like this idea of being very specific when you set out your expectations, whether it's planning or whatever it is, is one where AI could have quite an interesting role to play. If you can be specific, that could have quite a lot of benefits, not just at the company level, but maybe even within your team. How deep does this AI support potentially go?

Speaker 1:

It goes all the way down. You know, turtles all the way down. It's the whole game, the whole business. Some very large companies exist to coordinate subcontractors, and that planning exercise is often Excel models shared via email attachments in multiple iterations.

Speaker 1:

How much better would it be to feed those preferences into an AI and have them reconciled, in a proverbial snap of the fingers, against every new preference, in the time it took to submit them? That opens up a whole new world. No longer do you need to wait a quarter, a month, a week, even a day before you see the preferences incorporated into what a new planning cycle might look like for a company. And, more importantly, the AI can detect logical contradictions. Today's AI is really predicting the future a little bit, giving possible little futures that you need to triple-check because of all the confabulations. The AI of the near future will be able to detect and possibly correct logical contradictions across increasingly complex environments. So the skill set for a team, for an executive, is to practice being more specific and to develop the tools that facilitate that specificity.
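The contradiction detection Eric mentions already has a simple illustration in today's solver tooling. In the sketch below, again using Z3 as a hypothetical stand-in for the AI, each team's preference is tracked under a label, and the solver reports which labels conflict; the teams and numbers are invented.

    # Sketch: localizing a logical contradiction among submitted preferences.
    # Z3 stands in for "the AI"; the teams and numbers are invented.
    from z3 import Solver, Real, unsat

    price = Real('price')              # a hypothetical shared decision variable
    s = Solver()
    s.set(unsat_core=True)             # ask the solver to track blame

    # Each team submits one preference, tracked by a label.
    s.assert_and_track(price >= 90, 'finance_floor')   # finance: at least $90
    s.assert_and_track(price <= 70, 'sales_ceiling')   # sales: at most $70
    s.assert_and_track(price >= 0, 'sanity_check')

    if s.check() == unsat:
        # The core names a conflicting subset of the labels, found in
        # the time it took to submit the preferences.
        print(s.unsat_core())          # e.g. [finance_floor, sales_ceiling]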

Speaker 2:

More specificity would be a good thing for all leaders and founders to deploy anyway. I'm curious, because it feels like you put AI in the mix and suddenly, even if you're a smaller business, it sounds like you're going to get some pretty significant payback in terms of speed and better outcomes from deploying AI into your planning and collaboration processes.

Speaker 1:

That's right, that's where the tools are going, and that's how it happens today. What these large companies often do is formalize not their specific outcomes, but just a process. And what we discover with some of these larger companies is that they've actually forgotten that the purpose of their company is to get from London to New York. Instead, they just know in their heads that first you have to get down to the port, then you have to load the ship, then you need to make sure the crew is on board, then you need to move the ship. So they no longer have a transportation problem from London to New York; they have a ship problem. Even if you offer them an airplane, they just say, well, I don't understand, I just need my crew to be on board, and you're like, no, no, no, it's totally different. AI is an airplane; I don't have a ship for you.

Speaker 1:

That's how calcified some of these large organizations can be. They survive because of the enormous benefits of scale and of a common language for collaboration between people inside large organizations. But as this technology develops and people become more comfortable with being more specific about their own desires, little sections of these large organizations will be able to break out and collaborate with sections of other large organizations, and you can create these virtual networks that synthesize outcomes. That's where the future is going. But it's enabled, and started, by people.

Speaker 2:

Being more specific. So what's the secret sauce then for someone who's at, like, Series A or Series B? They've got about 50 headcount, they're scaling, and they've probably got a bit of a management team by now.

Speaker 1:

So we don't have a massive bureaucracy at the scale we are at.

Speaker 1:

But planning could have taken longer had our team not been trained in the discipline of laying out, on the first go, their preferences, their desires, and their constraints, with as much specificity as their particular job allows. In the old style, and this really still happens in companies, friends' companies that I talk to, you'll have one person who is unable to make that transition and wants to do the proverbial hand-waving, whether it's literally in person, or by video conference, or by email. They can have some beautiful language, but it's all in English or Arabic or Japanese or what have you, and it just doesn't get to the point, so somebody else then needs to interpret it. You have that whole negotiation cycle, and that really slows the organization down. You don't have objectives around which you can optimize. Everybody else has to be slowed to that slowest team member while bringing them along. And this becomes more evident as more people become facile in the skill of being more specific, or more formal, in the vernacular of a computer scientist.

Speaker 2:

This gives me slight flashbacks to very early in my career. I worked for an Australian guy who made me write out any problem one sentence at a time in Excel. You had to break the problem down into one sentence per cell. If you couldn't fit it into one cell, you had to break it into two cells; otherwise, that problem was not well defined enough. And the answer had to be in one sentence as well. Honestly, I still have those moments: it's late at night and I'm trying to think of one-sentence solutions.

Speaker 1:

So the reason this happens...

Speaker 2:

Go ahead. No, it sounds sort of similar.

Speaker 1:

Yeah, it sounds very similar, if not identical. The reason this happens is that you have to put your problems in a form that a machine will understand. You can't expect otherwise in the current frame. People are being led into this idea that I can put my thoughts and dreams into a large language model and it'll give me a beautiful poem. Great, but it's not going to design your airplane. If we actually get serious about some hard problems, I need the machine to understand my problems, and if I want it to understand my problems, then I need to give it the problem with a specificity that it can ingest. So I need to do exactly what you described, or close to it. I need to specify the problem so the machine can take it in. The better I do at that, the better the outcome I'm going to get and, to the point, the easier the collaboration will be with other team members who are in other time zones and have other sensibilities. Even if you have a mechanical engineer and an electrical engineer, people who might have similar sensibilities, they still need to collaborate, and today they will often have to speak in English to reconcile engineering models. How much better would it be to have an AI collaborate? In order to do that, you have to have these models be machine-readable. That's really where it's going.
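As a toy illustration of what a machine-readable model might look like, the sketch below captures each discipline's constraints as typed data that a program, or an AI, can reconcile directly instead of through prose email threads. Every field name, number, and check here is invented, not drawn from any real engineering workflow.

    # Sketch of a machine-readable engineering model: typed fields instead
    # of prose, so reconciliation can be automated. All names and checks
    # are invented for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MotorSpec:
        max_torque_nm: float     # mechanical team's requirement
        peak_current_a: float    # electrical team's limit
        torque_per_amp: float    # shared physical parameter

    def reconcile(mech: MotorSpec, elec: MotorSpec) -> list[str]:
        """Return the conflicts between two submitted specs, if any."""
        conflicts = []
        if abs(mech.torque_per_amp - elec.torque_per_amp) > 1e-9:
            conflicts.append("teams disagree on torque-per-amp")
        deliverable = elec.peak_current_a * elec.torque_per_amp
        if deliverable < mech.max_torque_nm:
            conflicts.append("current limit cannot deliver the required torque")
        return conflicts

    # Usage: each team fills in its spec; the check replaces an email thread.
    mech = MotorSpec(max_torque_nm=50.0, peak_current_a=30.0, torque_per_amp=2.0)
    elec = MotorSpec(max_torque_nm=50.0, peak_current_a=30.0, torque_per_amp=2.0)
    print(reconcile(mech, elec) or "specs agree")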

Speaker 1:

This applies to banks and risk management, it applies to power systems, it applies to any organization that does large planning. But for the company at Series A or Series B, how do we implement it? We implement it by having everybody be extra disciplined in the clarity with which they specify preferences, and that isn't facilitated by software right now. That's us interacting as people, because our organization is not 1,000 or 100,000 people. We share the same sensibility, so we can feel what each other is going through and talk through it with more clarity.

Speaker 2:

So I think I know the answer to this, but rather than assuming, I'm going to ask. I imagine when people first hear this, they go: well, that's going to take a lot of time. I don't get to hand-wave, I have to really think about this and break it down, and that's going to take me a lot of time. I don't have a lot of time, therefore I'm not going to do it. What would you say to that?

Speaker 1:

We've all sat through presentations where it's very clear the presenter had that mentality of not taking a lot of time on their presentation. So it's just a series of eye-chart bullet points. The time has been transferred from the presenter to the consumer of the information. You know, easy writing makes for hard reading.

Speaker 1:

Hard writing makes for easy reading. So here, great, it'll take you a little bit more time up front to think. No kidding. I'd love to not think; everybody would love to not think. But then I'm just pushing my thinking onto somebody else, and for that your career has a shorter half-life.

Speaker 2:

I talk to my clients about this a lot, this idea of the 1-10-100 triangle: one hour spent planning saves 10 hours of controlling, which saves 100 hours of fixing. It's really this idea that not doing the thinking isn't just pushing the thinking onto other people; it's also creating all sorts of complications and problems down the line. Say it's quick and easy to hand-wave, but come back a month later when the project's gone wrong and you've got to redo the whole thing: you've created way more work than the five or ten hours you saved by not thinking. It's created 500 hours of fixing time, plus the cost of the whole thing not being fit for purpose.

Speaker 1:

So this has quantifications available for it in computer science that people can look up. The longer an error in code goes unfound, the farther down the life cycle of a product it travels, and the more cost is put upon fixing that error. But this is contextual to the seriousness of the system in which it operates, and so I'm not saying that every problem demands the rigor of developing a rocket. But we can all benefit from being aware of this extra degree of specificity, and we'll find benefit in collaboration with colleagues internal and external to our organization. That's a lightweight way of implementing it today that benefits our team. We'll find that we're able to operate more quickly, because we'll be able to change direction and explore opportunities with more speed.
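The quantification Eric alludes to is the classic cost-of-defect escalation curve from software economics, often associated with Barry Boehm's work. The sketch below prints an illustrative version of that curve; the multipliers are ballpark assumptions in the spirit of the published figures, not measured data.

    # Back-of-envelope sketch of defect-cost escalation. The multipliers
    # are illustrative ballpark assumptions, not measured data.
    phases = {
        "requirements": 1,
        "design": 5,
        "implementation": 10,
        "testing": 25,
        "production": 100,
    }
    for phase, multiplier in phases.items():
        print(f"Defect caught in {phase:>14}: ~{multiplier:>3}x the cost to fix")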

Speaker 2:

So, if we wrap this up, it feels like being more specific pays multiple dividends: time saved, speed of outcome, better decision-making, and unearthing hidden problems you didn't know existed. Understanding how you can really speed up those collaborations and unearth those problems quicker is going to be a real force multiplier for anyone looking to scale their business. It's something they really should be aware of.
