European ChatGPT assistant to aid doctors with medical, administrative, and personal duties

Tonic App has launched Dr. Tonic, a new virtual assistant that supports medical doctors with their day-to-day tasks. Dr. Tonic is powered by the ChatGPT API, which provides access to the large language model trained by OpenAI.

The virtual assistant is integrated into Tonic App, a 360° tool for doctors that curates medical knowledge, clinical tools, educational content, job listings and more, already used by 119,000 physicians. Compared with the generalist version of ChatGPT, Dr. Tonic is a cheerful virtual assistant that prefers medical terminology and gives more deterministic answers.

Dr. Tonic makes it easier for doctors to retrieve knowledge, summarise medical records and key findings from studies, draft templates for patient referral letters and emails, prepare health information for lay people, and even plan their holidays or meals.

In the first 24 hours after launch, doctors exchanged an average of 414 words each with Dr. Tonic – a figure that is still growing. The top use cases have been 1) medical knowledge retrieval, 2) questions about medical studies, 3) utilitarian questions, 4) inquiries about Tonic App and 5) queries about the reliability of the service.

Even though ChatGPT was trained on a huge body of medical knowledge, it cannot yet be used safely for diagnosis and treatment decision-making. This is where Tonic App’s clinical content and previously acquired data become synergistic with the virtual assistant: they were produced to the highest standards, so they can safely support clinical decision-making and feed the model.

Dr. Tonic is also useful in reducing the administrative burden of medical doctors, which has been increasing, as healthcare moves towards value-based care that requires detailed tracking and reporting of quality metrics.

“We are excited to launch Dr. Tonic, because we believe this is the start of a new era for healthcare. AI finally enters the day-to-day operations of healthcare professionals, after years of promise”, says Daniela Seixas, CEO of Tonic App, and herself a medical doctor. “It is estimated that up to 40% of doctors’ time is desk work, particularly in primary care. We can now help medical doctors save real time, reduce stress, and focus on what matters the most: patients. AI is also set to help compensate the shortfall of healthcare workers in Europe.”

Beyond the obvious benefits, AI-assisted tools raise challenges that need to be addressed. In terms of data privacy, third-party personal data, such as patient identifiers, cannot be shared with the bot, and GDPR compliance must be ensured. At Tonic App, we do not allow the larger OpenAI generative model to learn from the data created by our doctor users, thanks to our dedicated API.
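The article does not describe how Tonic App enforces this in practice, but one common safeguard is to scrub obvious third-party identifiers from a prompt before it ever leaves the application. The sketch below is a hypothetical illustration of that idea, not Tonic App’s actual implementation; the identifier patterns and placeholder labels are assumptions, and real GDPR compliance requires far more than regex filtering.

```python
import re

# Hypothetical identifier patterns; a production system would use a
# vetted PII-detection service rather than ad-hoc regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "patient_id": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # e.g. 10-digit record numbers
}

def redact(prompt: str) -> str:
    """Replace matches of each identifier pattern with a placeholder tag
    before the prompt is forwarded to an external LLM API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise notes for patient 123 456 7890, contact jane.doe@example.org"
    print(redact(raw))
```

Only the redacted text would then be sent to the model, so patient identifiers never reach the third-party API and never become training data.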

Dr. Tonic is now available for professional use at Tonic App in France, Italy, Spain and Portugal, and soon in the UK. To learn more about how it works and how it can benefit your practice, visit www.tonicapp.io.

Tonic App supports medical doctors in diagnosing and treating their patients, by bringing together all the professional resources they need for their day-to-day work in a single mobile platform. Tonic App makes clinical practice even more practical.

The 37-person company was co-founded in Porto in 2016 by Daniela Seixas, CEO, together with Andrew Barnes, Christophe de Kalbermatten and Dávid Borsós.

Tonic App has been named by Forbes magazine as one of 60 female-led startups that are “shaking technology around the globe”.

For more information visit: www.tonicapp.io

ChatGPT is a leap forward but not the massive breakthrough everyone thinks it is, according to Elerian AI

Since the ChatGPT prototype launched in November, there has been massive interest in its application across industries, including customer service. While OpenAI’s ChatGPT does seem to take a massive leap forward and is continually improving, Elerian AI CTO Alfredo Gemma disagrees that it is the breakthrough everyone thinks it is – although it is an impressive milestone on the road to AGI.

Artificial general intelligence (AGI), the ability of an intelligent agent to understand and learn any intellectual task that a human being can, still requires a Deep Learning (DL) architecture capable of generalising effectively in order to work.

Says Gemma, “Large Language Models (LLM), such as the one powering ChatGPT, remember everything up to the point at which their training stopped. The question then becomes whether, beyond that point, the system is capable of the human-like generalisation needed to achieve AGI – and the likely answer is no.”

Human intelligence can be considered a combination of specialised intelligences (linguistic, emotional, logical-mathematical, spatial, bodily-kinesthetic, musical, etc.), leveraging memory in a particular way. The ability to generalise our knowledge is a fundamental aspect of human intelligence: humans can extend and apply knowledge acquired in a specific context to other contexts when we identify similarities. Generalisation is only possible if one can identify the context, and context can only be recalled through memory. Memory is therefore a requirement for intelligence.

To generalise, an intelligent system must be able to instantly repurpose its existing cognitive building blocks to perceive completely new objects or patterns without having to learn them, that is, without having to create new building blocks specifically for them.

In the end, the real problem with LLMs like ChatGPT is a structural one, which depends on the underlying architecture of the neural networks: the Deep Learning (DL) architecture. The biggest problem with DL is its inherent inability to generalise effectively. Without generalisation, edge cases are an insurmountable problem, something the autonomous vehicle industry found out the hard way after spending more than $100 billion on that bet and still failing to produce a fully self-driving car.

Says Gemma, “Some in the AI community insist that DL’s failure to generalise can be circumvented by scaling (as is done when LLMs are created), but this is not true. Scaling is precisely what researchers in the self-driving car sector have been doing, and it does not work. The cost and the many long years it would take to accumulate enough data become untenable, because corner cases are infinite.”

In many cases ChatGPT has been asked to write articles on various topics. The results were decently written in almost all of them, but they still needed correcting: every version of the story, even when prompted multiple times, contained errors that the chatbot could not identify when engaged in conversation. ChatGPT is prone to fabricating answers when its knowledge does not cover your request, even when you are not asking it to write an article.

Concludes Gemma, “Bottom line, cracking generalised perception and the DL architecture needed to achieve that is still an open problem and would be a monumental achievement. For now, ChatGPT is exciting but not exactly a massive game-changer.”