AI Leaders – Interview with Kacper Bazyliński, Neoteric

Welcome back to another AI leaders interview! This series aims to show you the behind-the-scenes details of the AI business through interviews with automation specialists, experts and solution providers. 

In our previous installment, Techmo CEO Bartosz Ziółko told us what the voicebot world will look like in the near future. Today, Kacper Bazyliński, AI team leader at Neoteric joins us to discuss implementing AI from the point of view of a software house. Neoteric helps corporations and start-ups introduce new solutions through AI – whether it’s data exploration, predictive analytics, NLP or any number of nonstandard applications.

AI solutions are becoming increasingly common in business, from database automation to sweeping organisational changes under the digital transformation banner. Tell us what Neoteric does and what solutions it offers its clients.

Neoteric is a technological partner for companies interested in innovating. We help them start their digital transformation journey – or see it through, as the case may be. We build applications, create UX, train machine learning algorithms; basically, we help our clients achieve their business goals through technology.

We’ve noticed a trend – companies, especially enterprises, tend to shift towards in-house solutions – they build their own AI departments, recruit top talent, et cetera. What do you think about this? What’s the advantage of hiring a software house when it comes to designing, building and implementing projects based on AI?

Enterprises can afford to make that move – they have the budget and time and already conduct R&D projects. In-house solutions have their advantages; after all, everything is controlled by the company. At the same time, you have to be mindful that assembling a team of top engineers or scientists takes time – and is a significant financial commitment. There’s also the risk that even the most sophisticated algorithm, which you’ll spend months developing, will turn out not to bring any value. It’s not about the algorithm itself – it’s about whether or not it can solve its intended business problem.

On the other hand, hiring a software house frees up the company from the responsibility of scouting and hiring talent. After all, even the largest enterprises don’t start out knowing what to look for in a candidate. A software house, however, already has a portfolio of completed projects and it can prove its worth with real results.

For us, the client’s business is our top priority. As such, we want to minimise their financial risk. Before committing to an implementation, we ask a couple of difficult questions and test various solutions. This approach works both for medium-sized companies that don’t want, or can’t afford, their own AI team, and for large players who need an agile team with a knack for business and rapid hypothesis testing.

What would you consider to currently be the biggest challenge in AI implementations?

Getting off the ground. We constantly see successful projects and hear about the benefits of AI, but we need to remember that about 80% of implementations end in failure. The collaboration between IBM Watson and the University of Texas MD Anderson Cancer Center is a perfect example: the solution didn’t bring the expected results, and yet it cost $62 million.

In my experience, the biggest challenge is strategy. A well-thought-out plan with clearly defined goals is simply a necessity – otherwise, we’re flying blind. AI should be treated as a tool to reach those goals. This approach allows us to clearly define the criteria and metrics by which we determine success. It also ensures that the AI strategy fits the company’s strategy perfectly; the business side is aware of the potential ROI, while the ML engineers know exactly how to train their model. In my opinion, this is the most important part, but it doesn’t exhaust the list of challenges. Strategy isn’t the only thing that determines the success of an R&D project. There’s also the question of assumptions, data quality, PoC implementation…

AI can be used in planning, database analysis, data mining, mapping, NLP… As a software house, you probably receive a wide variety of inquiries from potential clients. What are the most common questions asked?

Usually it’s a process that doesn’t work as intended. It could be a faulty sales funnel, or an issue with client or employee retention. We consider each case individually, carefully analysing the process, its underlying issues, and the client’s expectations. This lets us figure out what models will be the most effective. Oftentimes, we build recommendation systems for various applications and goals. We’re open, however, to any AI-based solution – as long as it solves our clients’ problems.
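
To give a flavour of what such a system can look like under the hood, here is a toy sketch of one common approach – item-based collaborative filtering with cosine similarity. The data and function names are invented for illustration; real systems are considerably more involved.

```python
import numpy as np

# Toy user-item interaction matrix (rows: users, columns: products).
# 1 = purchased, 0 = no interaction. Invented data for illustration.
interactions = np.array([
    [1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
], dtype=float)

# Item-item cosine similarity computed from co-occurrence.
norms = np.linalg.norm(interactions, axis=0)
similarity = interactions.T @ interactions / np.outer(norms, norms)
np.fill_diagonal(similarity, 0.0)  # ignore self-similarity

def recommend(user_id: int, top_k: int = 2) -> list:
    """Score unseen items by their similarity to the user's past items."""
    seen = interactions[user_id]
    scores = similarity @ seen
    scores[seen > 0] = -np.inf  # don't recommend what the user already has
    return list(np.argsort(scores)[::-1][:top_k])

print(recommend(0))  # item indices ranked for user 0
```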

I saw that your company organises the Neoteric AI Sprint – can you tell us a little more about the initiative and its benefits?

Our AI Sprint was created in response to a common complaint we hear from clients: that AI takes too long, eats up too much budget and doesn’t guarantee results. Companies that aren’t used to R&D are understandably leery of investing money into something that may – or may not – bring benefits. We talk a lot about the benefits of AI, but it’s not something that can be set up in five minutes to solve every conceivable problem.

AI Sprint begins with hypothesis validation: a client presents us with a problem, we figure out the “how?” and see whether or not our solution works (and if so, how well). It allows us to demonstrate the potential value AI can bring to a given company. This method of testing lets us quickly determine the likelihood of a successful implementation – which, in turn, helps us avoid wasting time building a mediocre model. 

The entire sprint takes less than a month, and it shows beyond doubt whether AI is a good fit (and if not, why). The client comes away with a validated model, the basis of a data strategy, and our recommendations for implementation. Deploying AI isn’t just writing an algorithm – it’s also a lot of work on process analysis and on building trust among the people who’ll be working with the solution.

Let’s talk tech for a moment. The GPT-3 model proved that with an incredibly large network and an equally enormous dataset, we can create the seed of a general-purpose AI that can quickly be taught tasks like translation or summarisation. In your opinion, when will we see networks of this type specialised in languages other than English?

The dataset used to train GPT-3 contained other languages, such as French and German, but they make up only a tiny fraction of the whole corpus. Specialised models for other languages, although smaller than GPT-3, are already available – such as FlauBERT, which is trained on French. It’s hard to estimate when we can expect a GPT-3 equivalent for other languages; you’d have to construct a similarly sized training dataset and have an appropriately large budget. The computational cost of building such models rises exponentially. It’s estimated that training GPT-3 on a single Tesla V100 would cost around $4.6 million and take around 355 years 😉
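
For a sense of where those numbers come from, here is a quick back-of-the-envelope sketch using commonly cited public estimates: roughly 3.14 × 10²³ FLOPs of total training compute for GPT-3, a V100’s theoretical ~28 TFLOPS in mixed precision, and about $1.50 per GPU-hour of cloud time. All inputs are approximations, not official figures.

```python
# Back-of-the-envelope estimate of GPT-3 training cost on one Tesla V100.
# All inputs are approximate, publicly cited figures, not official numbers.
TOTAL_FLOPS = 3.14e23          # estimated total training compute for GPT-3
V100_FLOPS = 28e12             # V100 theoretical mixed-precision FLOP/s
PRICE_PER_GPU_HOUR = 1.5       # assumed cloud price in USD

seconds = TOTAL_FLOPS / V100_FLOPS
hours = seconds / 3600
years = hours / (24 * 365)

print(f"~{years:,.0f} GPU-years")                      # ~356 GPU-years
print(f"~${hours * PRICE_PER_GPU_HOUR / 1e6:,.1f}M")   # ~$4.7M
```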

Interestingly enough, researchers from LMU Munich created a model with 223 million parameters that outperformed GPT-3 by about 3 points on the SuperGLUE benchmark.

Do you see any practical applications for a network of this type that would bring a measurable business benefit?

A model of this complexity can be used to great effect wherever NLP/NLU is applied – such as in chatbots. There are also examples of GPT-3 generating Excel formulas and SQL queries – who knows, maybe it can also provide valuable support for programmers?
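
As a purely hypothetical illustration of the SQL use case, a few-shot prompt to a GPT-3-style completion endpoint might look like the sketch below. It assumes the openai Python package’s classic Completion API; the engine name, table schema, and prompt are illustrative assumptions, not a recipe.

```python
import openai  # assumes the openai package and an API key are available

openai.api_key = "YOUR_API_KEY"  # placeholder

# Few-shot prompt: show the model a table schema plus one worked example,
# then ask it to translate a new natural-language question into SQL.
prompt = """Table: orders(id, customer_id, total, created_at)

Q: How many orders were placed in 2020?
SQL: SELECT COUNT(*) FROM orders WHERE strftime('%Y', created_at) = '2020';

Q: What is the average order total per customer?
SQL:"""

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine name at the time; illustrative
    prompt=prompt,
    max_tokens=64,
    temperature=0.0,    # near-deterministic output suits code generation
    stop=["\n\n"],      # stop at the end of the generated query
)
print(response.choices[0].text.strip())
```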

If you were to place a bet, which branch of business do you think will first implement solutions based on NLP models with billions of parameters?

Models with billions of parameters already exist – the biggest GPT-3 version contains around 175 billion. As for trillions, I have no clue; it’d require an almost inconceivable amount of computing power.

If I had to bet on a specific branch of business, I think the legal sector would be a safe bet: poring through court records of similar cases, looking for answers, even generating text. I see a lot of potential for automation and cost reduction there. Currently, NLP enjoys the most popularity in healthcare, commerce, telecoms, and high-tech companies.

Deep learning first rocked the world of image classification, then went on to revolutionise language processing and automatic speech recognition. Where can it go from there, in your opinion? What will be the next challenges for deep learning? What practical applications can we expect in 2021?

Aside from NLP, deep learning is often used in computer vision. Beyond image classification, deep learning algorithms are well suited to other problems, such as object detection (the famous YOLO – You Only Look Once – family of models). To me, the most fascinating application is 3D pose estimation: extracting a 3D pose from a single, two-dimensional photograph of a person.
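
For the detection side, one popular open-source implementation, YOLOv5, can be pulled via torch.hub in a few lines – a minimal sketch, assuming torch plus the ultralytics/yolov5 repository’s dependencies are installed, and with a placeholder image path:

```python
import torch

# Load a small pretrained YOLOv5 model from the Ultralytics hub repository.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run detection on an image (path or URL; placeholder here).
results = model("street_scene.jpg")

results.print()                        # summary: classes, counts, speed
detections = results.pandas().xyxy[0]  # bounding boxes as a DataFrame
print(detections[["name", "confidence"]])
```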

Computer vision and NLP enjoy a lot of research attention because their effectiveness isn’t yet comparable to what humans can do. It’s close, but it’s still not quite there – personally, I think that’s a great motivator. It’s hard to judge whether 2021 will bring us any commercial applications for 3D pose estimation, although the possibilities are vast: sports performance analysis, creating better models for video games, et cetera. I know some yoga apps are already taking advantage of it – they can judge how well you perform given poses.
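
To make the pose-estimation idea concrete, here is a minimal sketch using Google’s open-source MediaPipe library, which recovers 3D body landmarks from a single photograph. It assumes mediapipe and opencv-python are installed; the image path is a placeholder.

```python
import cv2
import mediapipe as mp

# Estimate a 3D human pose from a single 2D photograph with MediaPipe Pose.
image = cv2.imread("yoga_pose.jpg")  # placeholder path

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    # MediaPipe expects RGB input; OpenCV loads BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_world_landmarks:
    # 33 landmarks with x, y, z coordinates in metres, hip-centred.
    for i, lm in enumerate(results.pose_world_landmarks.landmark[:5]):
        print(f"landmark {i}: x={lm.x:.2f}, y={lm.y:.2f}, z={lm.z:.2f}")
```

A yoga app of the kind mentioned above could compare these recovered joint positions against a reference pose to score how well the user holds it.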