
About this Event
Description
Modern generative AI models can produce surprisingly high-quality text, images, video, and even program code. Yet the models are black boxes, making it impossible for users to build a mental model of how the AI works. Users have no way to predict how the black box transmutes input controls (e.g., natural language prompts) into the output text, images, video, or code. Instead, users must repeatedly write a prompt, apply the model to produce a result, and then adjust the prompt and try again until a suitable result is achieved. In this talk I’ll assert that such unpredictable black boxes are terrible interfaces, and that they always will be until we can identify ways to explain how they work. I’ll also argue that the ambiguity of natural language and a lack of shared semantics between AI models and human users are partly to blame. Finally, I’ll suggest some approaches for improving the interfaces to these AI models.
Event Venue
Data Sciences Institute, 10th Floor, 700 University Avenue, Toronto, Canada
Admission: CAD 0.00 (free)