We created Presentation 2.0

We redefined the presentation. Pick up your mic and start speaking; AI will take care of the rest.

Anurag · Nov 29, 2022 · 4 min read

Imagine this scenario: you're scheduled to give a presentation to an audience that has little to no knowledge of the topic you'll be discussing, and you find yourself without enough time to create a proper set of slides. As you step up to the podium and begin to speak, you notice that your presentation is being displayed on the screen behind you, automatically updating as you progress through your speech. Pretty impressive, isn't it? 😎

Do you dread giving presentations? Do you struggle to keep your audience engaged with static, outdated slideshows? Say hello to Dypres, the automated presentation system that's changing the game.

Dypres uses natural language processing and artificial intelligence to generate dynamic presentations in real time based on your speech input. No more spending hours crafting the perfect slide deck – Dypres does the work for you by extracting meaningful data from your speech and pulling relevant media and information from the internet.

Dypres is flexible, scalable, and completely customizable to meet the specific needs of your organization. And best of all, it's fully automated, so there's no need for manual input from you.

Imagine giving a presentation that's tailored to the specific needs and interests of your audience, with up-to-date information and engaging multimedia. With Dypres, that dream can be a reality.

This is no longer just a concept. We, Bit Lords, a team of four students at Government Engineering College (GEC), Thrissur, Kerala, India, built Dynamic Presentation, or DYPRES as we named it, at the Litebytes hackathon conducted by IEDC GEC, where we won third prize.

The Bit Lords team consists of Anurag (myself), Aqeel, Majid, and Sai, all pursuing our Engineering degrees in Computer Science from GEC Thrissur.

The front end of DYPRES is entirely built on React, a popular JavaScript library. Currently, the software has only two pages, namely the home page and the presentation page (named Playground).
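The two-page structure can be sketched as a small route table. The paths and component names below are assumptions for illustration, not DYPRES's actual code:

```typescript
// Minimal sketch of the two-page app described above: Home and Playground.
// Paths and the fallback behaviour are assumptions, not the real routes.
type Route = { path: string; name: string };

const routes: Route[] = [
  { path: "/", name: "Home" },
  { path: "/playground", name: "Playground" },
];

function resolvePage(path: string): string {
  const match = routes.find((r) => r.path === path);
  return match ? match.name : "Home"; // unknown paths fall back to Home
}
```

In the real app this mapping would be handled by a client-side router rendering the corresponding React component.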

Mammootty example

The process for generating content using our system involves a few steps. First, we take the audio input from the speech and convert it into text. From there, we extract the relevant keywords and pass them on to our AI. Using these keywords, the AI generates content on the given topic, which is then displayed on the screen in a pre-defined format. We've created two different themes that users can choose between.
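The keyword-extraction step in the pipeline above can be illustrated with a simple frequency count over non-stopwords. This is only a sketch of the idea; the article does not name the extraction package DYPRES actually used:

```typescript
// Hypothetical sketch of the "extract keywords from transcript" step:
// drop common stopwords, then rank the remaining words by frequency.
const STOPWORDS = new Set([
  "a", "an", "the", "is", "are", "was", "were", "and", "or", "of",
  "to", "in", "on", "for", "with", "about", "that", "this", "it",
]);

function extractKeywords(transcript: string, limit = 5): string[] {
  const counts = new Map<string, number>();
  for (const word of transcript.toLowerCase().match(/[a-z']+/g) ?? []) {
    if (!STOPWORDS.has(word)) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  // Rank by frequency and keep the top `limit` terms.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([word]) => word);
}
```

The resulting terms are what get handed to the AI as the topic of the next slide.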

As the AI receives new keywords, it updates the screen at a predetermined pace, constantly refreshing the information being displayed. This process continues on a loop, ensuring that the information presented to the audience is always up-to-date and relevant to the speech being given.
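The "predetermined pace" can be modelled as a simple rate limiter that only allows a screen refresh once enough time has passed. The 5-second default is an assumption; the article only says the pace is predetermined:

```typescript
// Sketch of the paced refresh loop: a new slide is only pushed when at
// least `intervalMs` has elapsed since the previous update.
class PacedUpdater {
  private lastUpdate = -Infinity;

  constructor(private intervalMs: number = 5000) {}

  // Returns true when it is time to refresh the screen.
  shouldUpdate(nowMs: number): boolean {
    if (nowMs - this.lastUpdate >= this.intervalMs) {
      this.lastUpdate = nowMs;
      return true;
    }
    return false;
  }
}
```

In practice this gate would sit inside the main loop (e.g. a `setInterval` callback), deciding whether the latest keywords trigger a re-render.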

Sundar Pichai example

Our system relies on a few key components to function properly. To start, we utilized real-time speech-to-text recognition software to convert audio into text. From there, we employed a keyword extraction package to identify the most important terms and phrases within the text.
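One way to get a live transcript in a browser-based React app is the Web Speech API's `SpeechRecognition` interface. The article does not say which recognizer DYPRES used, so treat this as an illustrative sketch; the browser wiring is guarded so the file also loads outside a browser:

```typescript
// Pure helper: merge finalized recognition results into one transcript.
function joinResults(chunks: string[]): string {
  return chunks.map((c) => c.trim()).filter((c) => c.length > 0).join(" ");
}

// Browser-only wiring (webkitSpeechRecognition is the prefixed form
// Chrome exposes); skipped entirely outside a browser environment.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new (window as any).webkitSpeechRecognition();
  recognition.continuous = true;     // keep listening through the talk
  recognition.interimResults = true; // stream partial transcripts

  const chunks: string[] = [];
  recognition.onresult = (event: any) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      if (event.results[i].isFinal) {
        chunks.push(event.results[i][0].transcript);
      }
    }
    console.log(joinResults(chunks)); // hand off to keyword extraction
  };
  recognition.start();
}
```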

The most critical aspect of our system is the use of OpenAI's GPT-3 language model. We leveraged this powerful tool to perform the heavy lifting behind the scenes, allowing us to generate relevant content based on the extracted keywords. OpenAI offers an API that we were able to integrate into our system, enabling us to take full advantage of the model's capabilities. In fact, GPT-3 is so versatile that we could even use it to perform the keyword extraction step if we wanted to.
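A GPT-3 call of that era went through OpenAI's Completions API: send a prompt, get generated text back. The model name and prompt wording below are assumptions for illustration, not DYPRES's actual values:

```typescript
// Build the request payload for OpenAI's Completions endpoint.
// "text-davinci-003" was a GPT-3 model available at the time.
function buildCompletionRequest(keywords: string[]) {
  return {
    model: "text-davinci-003",
    prompt: `Write short presentation bullet points about: ${keywords.join(", ")}`,
    max_tokens: 150,
    temperature: 0.7,
  };
}

// Network call, only attempted when an API key is supplied.
async function generateContent(keywords: string[], apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildCompletionRequest(keywords)),
  });
  const data = await res.json();
  return data.choices[0].text.trim();
}
```

The returned text is what gets slotted into the slide template for the current keywords.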

Sundar Pichai example

Project Abstract

DYPRES - Dynamic Presentation using Natural Language Processing (NLP) & Artificial Intelligence (AI)

Project Topic: Automated presentation using Natural Language Processing and Artificial Intelligence

Our project represents a significant advancement in the field of presentations, as it allows for dynamic content to be generated in real-time based on speech input. This is accomplished through a multi-step process that involves the use of several cutting-edge technologies.

First, we utilize a speech-to-text engine to convert the spoken word into written text. From there, natural language processing is used to extract meaningful data and keywords from the text. Using this data, we gather media files and information from the internet. Finally, we employ AI to select the most relevant content and create a dynamic presentation.
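The four-step pipeline above can be sketched end to end. Every step below is a stub standing in for the real component (recognizer, keyword extractor, media search, GPT-3); only the wiring between the steps reflects the described design:

```typescript
// End-to-end shape of the four-step DYPRES pipeline, with stubs.
type Slide = { title: string; body: string; mediaUrls: string[] };

const speechToText = (audio: string) => audio;            // 1. STT (stub)
const pickKeywords = (text: string) =>                    // 2. NLP (stub)
  text.split(/\s+/).filter((w) => w.length > 4);
const fetchMedia = (keywords: string[]) =>                // 3. media (placeholder URLs)
  keywords.map((k) => `https://example.com/media/${k}`);
const generateSlide = (keywords: string[], media: string[]): Slide => ({
  title: keywords[0] ?? "Untitled",                       // 4. AI selection (stub)
  body: `About ${keywords.join(", ")}`,
  mediaUrls: media,
});

function presentationStep(audio: string): Slide {
  const text = speechToText(audio);
  const keywords = pickKeywords(text);
  return generateSlide(keywords, fetchMedia(keywords));
}
```

Running this once per refresh interval, on the latest stretch of speech, yields the continuously updating presentation described above.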

This approach offers several advantages over traditional presentations. By tailoring the content to the audience in real-time, our system ensures that the information being presented is always up-to-date and relevant. Additionally, our system is fully automated, scalable, and highly customizable, making it suitable for presentations of any size and complexity.

Overall, our project represents a major step forward in the world of presentations, and has the potential to revolutionize the way that organizations communicate their ideas and information to the world.

Bit Lords Group Photo
