Artificial Intelligence Is Already Shaping Hawai‘i

Five local experts discuss how AI is helping to synthesize data, generate ideas and handle tedious tasks.

On Aug. 30, Hawaii Business Magazine gathered local experts on artificial intelligence to discuss AI in Hawai‘i at Mid-Pacific Institute’s Bakken Auditorium before an audience of students, educators and other adults. This is a condensed and lightly edited version of that discussion, which I moderated.

PETRANIK (editor and executive publisher of Hawaii Business): Before I start my questions, a quick poll of the audience and panel. Many of us have mixed feelings about AI, but my question is: Which way do you lean, more excited or more scared? First, a show of hands from everyone who is generally looking forward to AI and its uses. (pause) I see three-quarters of the audience and everyone on the panel are more excited than scared. OK, who is with me and more scared than excited? (pause) Just a quarter of the audience.

We have high school students in our audience, so let’s start my questions with AI and education. Mark, if I was a teacher, I would be terrified my students would use AI to do their homework instead of doing it themselves. But you are excited about AI in education. Tell us why.

Mark Hines

MARK HINES (director of the Kupu Hou Academy at Mid-Pacific, with a doctorate in educational technology and over 40 years of experience as a math, science and technology teacher and technology coordinator): As a proponent of student-centered learning, I think concerns about cheating are tied to structures in lots of schools that make teaching and learning in classrooms in 2023 look almost identical to classrooms of 1923. Yet our understanding of learning has evolved. So the kind of work we give students is the key to that answer.

I’ve asked adults who cheated, “Why did you cheat?” They said, “Because the work didn’t have meaning. For me, it was busy work.” Students, I hope your experience has not been like that. Part of the work I do in the community is helping teachers and schools think about how to create learning that’s meaningful, because when something’s meaningful, we’re willing to do the work.

Mark Hines of Mid-Pacific Institute, right, says AI will only increase everyone’s need to learn the difference between good and bad sources. Chase Conching is on the left. | Photo: Aaron Yoshino

Kids will spend the entire weekend learning a skill they really want to master. So when there’s meaning, the work is worth doing. The harder question for educators to work out is: “In what ways can we create learning where cheating isn’t an option, because the work has meaning for students?”

PETRANIK: Ed Sniffen, as director of the Hawai‘i Department of Transportation, you and your team are responsible for the state’s transportation infrastructure, including our highways, airports and commercial harbors. How are you using AI?

Ed Sniffen

ED SNIFFEN: For a state agency in transportation, there are tons of data coming through daily that we can act on. Previously, our modus operandi was to react to things: to make sure as few of you as possible got stuck in traffic and as many of you as possible got to your destinations as safely and as efficiently as possible.

We had to transition from being reactive to being proactive, but we can’t be proactive with all that data, no matter how many people we hire. So we use AI as a filter, making sure we can get all that data into an AI system where we can consume it and make decisions. We had to train our AI platforms to ensure they helped us act before an incident or accident, to prevent it.

PETRANIK: Does more data make your AI better?

SNIFFEN: Yes. In the past, we would hold off on collecting data because we couldn’t do anything with it. But as we advanced in the way we filtered our data, we started pulling in more data and worked with Oceanit and then PICHTR (the nonprofit Pacific International Center for High Technology Research) and enlisted high school students to help us and to provide us with different sets of eyes.

PETRANIK: I want to take another poll, this one only for students. As you think about what career you might want, are you also thinking about how AI is going to influence that career, whether it changes the nature of the job or eliminates some jobs in that field? If yes, raise your hand. (pause) That’s about every student. Are there any students who are not thinking about how AI is going to impact their possible career? (pause) Just one.

Summer Rankin is a computational neuroscientist who investigates the boundaries of AI and drives data science solutions for federal government clients. She has a doctorate in complex systems and brain sciences and works as a senior lead data scientist at Booz Allen Hamilton’s Honolulu Chief Technology Office. Summer, how are you and your colleagues using AI in Hawai‘i?

Summer Rankin

SUMMER RANKIN: I started in D.C. at our headquarters, and they moved me here three years ago because we wanted to build an AI team. We’ve gone from just me to almost 30 people. We have a lot of projects that involve building AI models, data strategy and many other things. And we’re not just focused locally: We are very connected to our colleagues in Japan, D.C., California and elsewhere because our clients are also global.

Things we work on here include building a search engine for old documents that exist only as hard copies: They need to be scanned in, stored and made searchable. The scanning is a separate step called optical character recognition, which is a form of computer vision.
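
(For illustration: The code below is a minimal sketch of the kind of scan-and-search pipeline Rankin describes, assuming Python with the pytesseract and Pillow packages and the Tesseract OCR engine installed. The folder name, file format and word-count scoring are placeholders, not details from the panel.)

```python
# Hypothetical OCR-plus-search sketch: scan page images into text, then
# rank documents by simple keyword counts.
from pathlib import Path

import pytesseract
from PIL import Image


def ocr_directory(scan_dir: str) -> dict[str, str]:
    """Run optical character recognition on every scanned page image in a folder."""
    texts = {}
    for path in Path(scan_dir).glob("*.png"):
        texts[path.name] = pytesseract.image_to_string(Image.open(path))
    return texts


def search(texts: dict[str, str], query: str) -> list[str]:
    """Return document names ranked by how often the query words appear."""
    words = query.lower().split()
    scores = {name: sum(body.lower().count(w) for w in words) for name, body in texts.items()}
    return [name for name, score in sorted(scores.items(), key=lambda kv: -kv[1]) if score > 0]


# Usage (assumes a folder of scanned pages exists):
# docs = ocr_directory("scanned_records")
# print(search(docs, "harbor maintenance contract"))
```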

We also build chatbots, maybe for clients that have a public-facing website where they want citizens to engage, so they need a Q&A tool.

And we use computer vision to detect things in images and video – think drone videos or traffic cameras. And there’s also a lot of digital signal processing – acoustic signals where you can detect the presence or absence of something using time series analysis and anomaly detection.
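
(For illustration: The sketch below shows one simple version of that idea – flagging points in a 1-D signal that deviate sharply from a rolling baseline. The window size, threshold and synthetic data are assumptions for demonstration, not the methods the panelists use.)

```python
# Hypothetical anomaly-detection sketch: flag samples that sit far outside
# the rolling mean and standard deviation of the recent signal.
import numpy as np


def rolling_zscore_anomalies(signal: np.ndarray, window: int = 256, threshold: float = 5.0) -> np.ndarray:
    """Return indices where the signal deviates sharply from its recent baseline."""
    hits = []
    for i in range(window, len(signal)):
        baseline = signal[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
            hits.append(i)
    return np.array(hits)


# Example: quiet noise with one loud transient injected at sample 5,000.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 10_000)
signal[5_000] += 25.0
print(rolling_zscore_anomalies(signal))  # detects the injected transient at index 5000
```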

We work with health care. I did a project with the FDA where we looked at adverse events following blood transfusions by analyzing health records. We looked for anomalies using NLP (natural language processing) to find out, basically: Where did the doctors not know this or that was the reason for an outcome until after the fact?

Summer Rankin, left, says every new technology eliminates some jobs and creates others – and AI will also do that. | Photo: Aaron Yoshino

Another thing we do, especially for local folks, goes by a lot of names including “digital transformation.” It can be when someone says, “We want to be in the cloud, but right now, we’re doing everything on local computers.” We go in with a team and do a whole assessment: interview people to understand where they are, where they want to be, write reports, make recommendations and maybe help with implementation.

We’re also becoming an AI advisor to people on questions like: Which model is less biased? Which model is more ethical? Because, make no mistake, we call it AI, but these large language models are built on the backs of humans to give them the guardrails they do have.

PETRANIK: Ian, when I started planning this panel, you were the first person I thought of because you’re well informed about local innovations and you are also forward thinking. So tell us about AI in Hawai‘i.

Ian Kitajima

IAN KITAJIMA (an executive with over three decades of experience in the innovation and development of advanced technologies, including 21 years at Oceanit; today, he is the president of PICHTR, the Pacific International Center for High Technology Research, a Honolulu-based nonprofit): When I think of innovation in government, I think of Ed. When I was at Oceanit, we held an event to show how artificial intelligence could be used, and one example was classifying vehicles. Ed saw that and said, “I want to look at how to apply AI to our traffic studies.” So early on, we worked with him and his team to apply artificial intelligence and machine vision to traffic studies along the Nānākuli-Wai‘anae corridor.

Later he asked: “Can you do speed studies using this technique?” So we did speed studies in that area and eventually along Pali Highway. Now the technology is used on smart intersections: How do you create analysis around how close vehicles are getting to each other and to people – are the vehicles getting heart-stoppingly close to pedestrians? Because if you quantify that and see the numbers going up, you know you’re going to have an accident.
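
(For illustration: The sketch below shows one way such a near-miss metric could be computed once a vision system has already converted each camera frame into vehicle and pedestrian positions. The coordinate format and the 2-meter threshold are assumptions, not details of the Oceanit or HDOT systems.)

```python
# Hypothetical near-miss counter: given per-frame (x, y) positions in meters,
# count how often any vehicle comes within a threshold distance of a pedestrian.
import math


def count_near_misses(frames: list[dict], threshold_m: float = 2.0) -> int:
    """Each frame is a dict with 'vehicles' and 'pedestrians' lists of (x, y) tuples in meters."""
    near_misses = 0
    for frame in frames:
        for vx, vy in frame["vehicles"]:
            for px, py in frame["pedestrians"]:
                if math.hypot(vx - px, vy - py) < threshold_m:
                    near_misses += 1
    return near_misses


# Example: one frame where a vehicle passes 1.5 m from a pedestrian.
frames = [{"vehicles": [(0.0, 0.0)], "pedestrians": [(1.5, 0.0), (10.0, 10.0)]}]
print(count_near_misses(frames))  # -> 1
```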

We’re going to use dashcams with students to do road maintenance analysis. I think there are so many potential applications.

PETRANIK: Chase, your work involves helping local businesses, nonprofits and the community. You did a lot of work using tech and innovation to help the community during the peak of the Covid pandemic in Hawai‘i. How are you using AI now?

Chase Conching

CHASE CONCHING (principal and creative director of Library Creative, an aio Digital company and sister company of Hawaii Business): At Library Creative, we’ve been using AI tools like ChatGPT, Claude, Midjourney and Stable Diffusion to kickstart our creative process when we’re working with clients.

If there’s a branding project, a website or marketing project we’re getting off the ground, we’ll use AI to help us brainstorm ideas. It’s not meant to replace your work as a human but allows you to do those uniquely human tasks, the creative and empathetic tasks, by automating what I call the four D tasks – those that are difficult, dangerous, dirty and dull – tasks that humans really shouldn’t be doing.

By automating those tasks, we’re able to focus on things that are of higher value to us while building things for our clients. Like Summer said, we are helping to build things like customer service chatbots and data projects – first and foremost doing things responsibly, equitably and ethically. That includes things like data privacy for our financial or healthcare clients, and making sure we have humans on both ends who double check the responses that these AI language models are producing.

Also exciting is the work we’re doing on the community front, especially in the Native Hawaiian community, which I think has been underserved when it comes to new technologies. In one case, we’re discussing how we can use AI models and drone photography in Hawai‘i to optimize crop yields and water management.

Another cool thing we’re helping to work on is to create an accessible digital Hawaiian dictionary that businesses and state agencies can tap into, to ensure they’re using Hawaiian words correctly and authentically on their websites, for example. AI makes possible things that previously have been hard to accomplish.

PETRANIK: Mid-Pacific asked its students to submit questions for this event and we got a lot of great ones. One theme that had multiple questions was about controlling the dangers of AI. How do we restrict AI from being used for malicious purposes? How do we avoid losing control over AI – to prevent it from taking over decision-making from people?

RANKIN: ChatGPT is amazing, I love it, but it’s not “The Terminator.” We’re not there yet and we may never get to true artificial general intelligence.

I think that what AI can be and is being used for a lot of the time is to help us understand the data around us faster, and potentially more accurately, so then humans can make decisions. A lot of people in AI are very, very committed to keeping major decisions with humans across the board.

There’s a lot of skepticism about putting AI models in place of a human when it comes to big decisions. I think that’s basically how we avoid that. I don’t see a huge appetite for turning over control to AI. The other thing we can do – and this is a society question – is hold people accountable. If you drive a car and the car malfunctions, the maker of the car is held accountable. If your AI model is hurting people or behaving in a way that you did not tell people it was going to, then that should be your responsibility.

It’s important for people to think twice about what models they use and when and where they’re using them. What data were they trained on? Yes, there are going to be bad actors, and I don’t have a complete answer to that because we don’t regulate free speech and we don’t regulate the internet in that way. But people do moderate places on the internet and you want to go to places that are moderated, so AI models will go the same way.

We have to think about what model to use. I’m not going to use a generative AI model to pull out a fact. I might use a search engine to pull up a document that I then read, but I’m not going to expect a generative AI model to give me accurate information like that. We need to get that understanding to people – consumers, students and developers. They have to understand that generative AI should not be used for that purpose. (Definition: Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.)

SNIFFEN: As to whether you should use AI in school: I say absolutely. You should use every tool that’s available to you, however you can, because when you come to the workforce, I’m going to expect that you’d be using that tool for us as well.

Ed Sniffen, center, says students should learn to use AI “because when you come to the workforce, I’m going to expect that you’d be using that tool for us as well.” | Photo: Aaron Yoshino

PETRANIK: Another common theme in the students’ questions is jobs. Will AI leave enough good jobs for humans?

SNIFFEN: We are short 25% of our workforce and that’s just in the Department of Transportation. Across the engineering industry, there’s that shortage as well. So the answer is absolutely no: It’s not going to take away jobs, especially high-paying jobs within the industries that we work in.

But it’s going to make your jobs way, way easier. Allowing AI to do its job gives you the flexibility or the freedom to express your creativity and your critical thinking because you don’t have to consolidate all the data before you make decisions.

HINES: It goes back to what I mentioned earlier about models of education. One thing we are doing at Mid-Pacific – I know this is true for many schools – is moving away from thinking of learning as passing along a canon of facts.

Instead, we help develop human potential, giving each student an opportunity to develop their own creative potential to know what they are passionate about and what they want to do.

KITAJIMA: I think it was Alvin Toffler who said: The illiterate of the future will be those who cannot learn, unlearn and relearn.

RANKIN: Every generation has advances in technology that make certain jobs go away and certain jobs come up. When I was a kid, you could get a job just typing stuff for people. You probably wouldn’t hire a full-time person to do that now. So think about AI as more of a catalyst for new jobs, rather than killing old jobs – allowing people to do something different.

PETRANIK: Can I add something else to your education plate? We need a values firewall, because it’s the values you bring to AI that determine how it’s going to be used.

HINES: So many schools feel like, “We need to do values education, so we need a valid ethics course.” It was treated in the way that schools always take things apart and say, “We’ll train that skill as a separate thing.” But we talk a lot about embedded values – learning in integrated approaches – because that ties the learning together with why it’s worth learning and why it’s worth bringing values to the work you do. So that needs to be part of the entire learning experience, not just a course in ethics. Not that that doesn’t have a place.

RANKIN: I couldn’t agree more. Especially at the university level, we used to joke about how, if you’re a computer engineer, you don’t even take an ethics course – that’s just not part of the discourse. And we often say that might be contributing to some of the problems we’re having today: We have people who aren’t really trained in, or even thinking about, what we call the social sciences and ethics. It should be a thread throughout, especially for developers, but also for U.S. citizens, end users and consumers thinking about the ethics. It’s not “Can we do it?” It’s more like, “Should we, and how?”

PETRANIK: Another common theme in the questions submitted by Mid-Pacific students was: How do we protect ourselves from misinformation spread by AI? Social media has already undermined our democracy; will AI destroy it?

HINES: When I hear these kinds of questions, I think: Is there a past corollary to that? Back in the ’80s and ’90s, we had encyclopedias and that was knowledge. And we had the National Enquirer, and that was knowledge too. (audience laughter) As a science teacher, I remember conversations with students about “I heard about aliens being kept here,” or I heard “That’s going to cure this particular illness.” I would ask: “Where did you get that from?”

Misinformation has always been with us. I think schools have realized students need more than just a one-time interaction with a library specialist who will tell them how to tell the difference between good sources and bad sources. I love the work of Howard Rheingold, who talked about crap detection, pardon my language. Everyone needs a good crap detector.

We need a set of skills that help us ask: “How do I know that what I’m reading is factual?”

PETRANIK: Here’s an interesting question: Will AI promote the loss of human connection? I think social media has already undermined human connections in many ways.

RANKIN: Is a connection over social media not personal?

PETRANIK: It is a different form of connection. So do you see AI creating different forms of personal connections?

KITAJIMA: I would say it would undermine the connections we grew up with, but not the connections that are forming now. It changes the dynamics.

I’ve been working in Korea for many years, where you have a rapidly aging population. Millions of seniors live alone, and the government is responsible for them now because their families are not taking care of them. So they’re developing AI companions that seniors can converse with. On the back end, it’s ChatGPT and voice activated. They’re having conversations with these systems because they’re lonely.

Ian Kitajima, left, says AI provides lonely seniors with conversational partners, which has psychological and medical benefits. Steve Petranik is at right. | Photo: Aaron Yoshino

There’s further value in that because I learned, as my father was aging, that if you don’t use your voice and vocal cords, you start losing your ability to swallow and can have problems with choking. So there are so many opportunities for AI to help us live longer and be more engaged.

CONCHING: There are senior centers across the country that are using this sort of AI therapy. While it’s not medically certified, they’re using it so elder patients have a pseudo human connection.

There are many possibilities: Imagine teachers speaking in their native language to learners who speak a different language, but AI translates in real time. That allows a human connection, not only on a personal scale, but on a global level.

PETRANIK: Another question from the students: If AI takes information and images from many sources, does that count as plagiarism? Does it count as copyright violation?

HINES: One of the things I appreciate about Bing is that when it gives you an answer, it also cites resources it used.

RANKIN: This is where we run into the problem of the black box. If it’s a neural network with multiple hidden layers, you cannot look back into it. It’s not like a decision tree, where I can go back into the model and see exactly why it made a decision or what source it used – which is why it’s important to understand what data it was trained on. That’s a place you could start. But it’s tricky. You would never know with 100% accuracy – the way you could with a non-neural-network model – what feature it used to come up with that answer.
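
(For illustration: The sketch below, using scikit-learn’s built-in iris dataset, shows the contrast Rankin draws – a decision tree’s rules can be printed and traced for any single prediction, which a multi-layer neural network does not expose in the same way. The dataset and tree depth are arbitrary choices for demonstration.)

```python
# Hypothetical interpretability sketch: fit a small decision tree and inspect
# both its full rule set and the exact path used for one prediction.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Every if/then rule the model uses is human-readable:
print(export_text(tree, feature_names=list(iris.feature_names)))

# For a single sample, decision_path() shows exactly which nodes fired --
# the kind of traceability a deep neural network's hidden layers lack.
sample = iris.data[:1]
print(tree.predict(sample), tree.decision_path(sample).indices)
```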

CONCHING: Coming from the creative field, it’s an exciting time – with these generative AI models that can produce hyper-realistic images. And now there are new tools that generate video from text. Right now they look bad, but this is the worst it’s ever going to be; next month, it could be really good. Still, there needs to be a human element added.

PETRANIK: Do you think artists will be pushed by AI to be more unique and different, because AI works of art are created from what was done before?

CONCHING: 100%. I feel AI tools do a good job of getting us part way there. In my company, it helps us brainstorm ideas that maybe we wouldn’t have had perspectives on or experience with. Then we build on those initial ideas to help us create something new for our clients. Those initial AI generated concepts are a super helpful tool.

PETRANIK: Last question is about practical advice and tools. What’s available now that ordinary people can use?

KITAJIMA: I have the paid version, ChatGPT Plus ($20 a month for individuals). I’ll get large documents, like RFPs (requests for proposal) that are 50 pages long but without much of a summary. I’ll upload the document and ChatGPT can summarize the file in simple terms and provide highlights in 60 seconds. It saves me a tremendous amount of time.
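
(For illustration: Kitajima is describing the ChatGPT website. Readers who want to script the same kind of summarization could use a sketch like the one below, based on OpenAI’s Python client; the model name, file name and prompt are placeholders, and a 50-page document may need to be split into chunks to fit the model’s context window.)

```python
# Hypothetical document-summarization sketch using the OpenAI Python client
# (openai >= 1.0). Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

with open("rfp.txt", encoding="utf-8") as f:
    rfp_text = f.read()  # very long documents may need to be chunked first

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whatever you have access to
    messages=[
        {
            "role": "user",
            "content": "Summarize this RFP in plain terms and list the key "
                       "requirements and deadlines as bullet points:\n\n" + rfp_text,
        }
    ],
)

print(response.choices[0].message.content)
```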

HINES: Our students taught me that when I sign up for a service and there are, like, 13 pages of terms and conditions, I can give them to ChatGPT and it gives me a summary that’s easier to digest.

I blew up my ACL in March and got the first report from the MRI – it was 13 pages and impossible to understand. I asked ChatGPT to summarize it. Just be careful about giving your personal information.

The third one I do all the time: I’ll look in my refrigerator and see I’ve got six ingredients. So I tell that to a related app called ChefGPT and say that I want, say, Asian-influenced food, and it spits out recipes.

CONCHING: I love it, ChefGPT. There’s something I use more and more: a free app on my iPhone called Pi. It’s designed to be friendlier and more empathetic than most large language models. I used it on my drive to Mid-Pacific today to help me prepare for this discussion. I said, “I’m taking part in this discussion and here are questions that will be asked of me,” and I had an actual voice conversation with what’s almost like a real human person.

Chase Conching, left, says he is using an iPhone app called Pi that lets you have more empathetic conversations with AI than other models he has tried. | Photo: Aaron Yoshino

So if you have to give a speech and need feedback on it, or if you are having trouble writing a text to a friend, it’ll recommend text for you that is generally friendlier and more empathetic than other tools like ChatGPT.

I like that it sounds like I’m talking to a real person – a back and forth conversation – and it will recall what I said 10, 20 minutes ago. Super helpful in helping me have these in-depth, thoughtful conversations.
