“There will most likely be no ‘perfect world’ with or without AI, but we can help guide a community of innovation and fellow innovators through responsible practices.” —Karol See, Cascadeo AI Head of Product
Five Questions for Karol See, Cascadeo AI Head of Product
Cascadeo: You were recently made Head of Product for Cascadeo AI, Cascadeo’s multi-AI-powered cloud management platform. What’s the most interesting part of working on Cascadeo AI?
Karol See: The endless range of possibilities across the many facets of technology involved, including cloud innovation, user experience, artificial intelligence, and data management, keeps me constantly eager to keep moving.
Cascadeo: What excites you most about the current advances in generative AI technology and opportunities for gen AI integrations?
Karol See: It’s always interesting when a technology gains momentum with the general public. Many more communities are going to be able to use something that once seemed too far out of reach. With gen AI’s capability not only to generate basic text but also to communicate in multiple languages, that breaks down a lot of borders.
Take, for example, initiatives like Isdapp and the Philrice Text Center. Once these technologies are able to communicate, explain, and expand further, reaching these communities of farmers and fisherfolk from different backgrounds through different languages and dialects, that would be momentous. It won’t happen overnight, but we will reach a point where these touchpoints are embedded as seamlessly as having an email address or mobile number.
Cascadeo: What drew you to working with gen AI in the first place?
Karol See: I’ve always been fascinated with data, logic, and the art of natural language processing. Growing up in a time when we were left to figure out technology because our parents couldn’t, and then proceeding to study the art of communicating with technology via programming and code, I found the two-way street interesting: explaining to computers what I need them to do, and explaining to other humans how to speak to the computers before they can “use a feature.” And I’ve come to treat gen AI as another entity capable of human language, built on top of logical networks.
Cascadeo: In the past, you’ve mentioned that you treat interactions with generative AI like conversations. Can you say a bit more about that?
Karol See: With tools like AWS Bedrock’s playground, ChatGPT, or Google Bard, you’re essentially given a blank slate as a starting point. It’s like getting to know a new person each time (or each thread). With the vast amount of knowledge generative AI has, you can assign any specialization and personality to any given thread. You don’t need the perfect prompt the moment you open a chat; you can steer the conversation together toward the specific output you’re aiming for. Communicating with generative AI tools is, in a way, a collaborative learning experience. Go forth, push the boundaries, and have fun!
Cascadeo: You’re interested in AI ethics. What do you think AI ethics would look like in a perfect world?
Karol See: As AI becomes more mainstream, and as we embed generative AI (and our own ML practices) into the workstream of Cascadeo AI, we need to hold ourselves accountable that whatever we put in and get out (data in, data out) is used for good; that we only collect and use relevant information; and that we learn from a wide range of audiences across different cultures and accessibility standpoints. We want to guide our customers, and our team, through the realm of AI and data privacy with high regard for ethical practices, sustainability, and accountability. There will most likely be no “perfect world” with or without AI, but we can help guide a community of innovation and fellow innovators through responsible practices.
Bonus Cascadeo sub-question: What do you think are the most essential considerations organizations should keep in mind as they begin to build their own AI ethics frameworks?
Karol See: As a minority in tech, coming from a female and Asian demographic, the consideration that resonates most with me, and the one I want to ensure is practiced internally and with clients, is data bias. We already live in a world where biases, stereotypes, sexism, and racism exist. If AI, by definition, is to have human-like intelligence, then ideally that humanity should encompass and take into consideration all personas.