AI in Manufacturing: Expert Insights on Implementation, Security and Workforce Preparation
Originally published on December 22, 2025
“I think it’s inevitable, so whether we are prepared for it or not, it’s coming, and the question becomes: how does a business owner prepare the workforce for it?” — Julie Kniseley, HR Services Leader at James Moore
In this episode of Moore on Manufacturing, Mike Sibley and Kevin Golden sit down with three James Moore experts to discuss the practical realities of implementing AI in manufacturing. Julie Kniseley, Tomas Sjostrom and Daniel Shorstein share insights on overcoming workforce resistance, protecting your business from cyber threats and finding the right starting point for AI adoption in your organization.
Full Episode Transcript
[00:08] Mike Sibley: Hi, I’m Mike Sibley, leader of the James Moore manufacturing team. I’m here with Kevin Golden, one of my partners and also a member of our James Moore manufacturing team. Today, I’m very excited that we are joined by three of our James Moore experts. We have Julie Kniseley who leads up our HR services.
[00:27] Mike Sibley: Our HR services is really what we think about as strategic human capital: building that workforce now and in the future, everything from accountability and leadership to recruiting and retention, all those important aspects. Julie handles that for our clients. Tomas leads our technology services, so when we talk about outsourced IT, managed IT, cybersecurity, especially 24/7 monitoring, all things critical to security and managing IT networks.
[00:57] Mike Sibley: Tomas and his team handle that. And then we have Daniel Shorstein, who leads our digital team. Digital, to keep it simple, involves business intelligence, data analytics, data automation, building dashboards, taking your data and putting it in front of you in a way that lets you make really good business decisions really quickly.
[01:21] Mike Sibley: And that involves building out using generative AI, which is a lot what we’re going to talk about today. So, we’re talking all things AI. I would be remiss if I didn’t mention that these three are actually going to be in a webinar to dive much deeper than we can get in this session on June 5th. So, highly encourage everybody to sign up for that webinar.
[01:40] Mike Sibley: But we’re going to talk about AI because AI is everywhere, and AI means different things to different people. We’ve got how to use it to make your life, to make business, more efficient. We’ve got issues with HR and what people think about this and how it’s going to impact people. That’s a really important part that often gets left out.
[02:00] Mike Sibley: Then of course there are all sorts of security issues. Nowadays I hear that people are writing scripts and putting them into the LLM, and those scripts could have some danger built into them, or people are sharing confidential information, and how do we lock that down? So there’s a whole host of issues.
[02:19] Mike Sibley: Fortunately, like I said, we’ve got three of our experts who are going to be able to weigh in on different aspects of this today. So, Julie, Tomas, Daniel, thank you for being here and welcome.
[02:28] Julie Kniseley: Thank you.
[02:29] Tomas Sjostrom: Thank you.
[02:30] Daniel Shorstein: Thank you, guys.
[02:31] Mike Sibley: It’s great to have all of you. Appreciate you spending your time with us.
[02:38] Mike Sibley: You know, artificial intelligence is a very broad term. It’s like saying accounting, or pick your industry of preference; there’s a lot underneath there to digest, right? You see it all over news media outlets online, everyone’s talking about it, so you can go in a lot of different directions. But for today, I want to talk a little bit about what we really mean when we say artificial intelligence for the purpose of this conversation. So maybe, Daniel, you can kick us off: what do we mean when we say AI, whether it’s in the digital, HR or IT realm? How can you define that for us?
[03:07] Daniel Shorstein: Yes, AI’s been around for quite a while. I like to define artificial intelligence as computers acting as humans and making decisions similar to how humans would make those decisions. So if you think about traditional computer programming and functions, not AI, there’s an input and then a specific output. So like you type in a calculation, it gives you an output of that calculation.
[03:37] Daniel Shorstein: You click a few things and you know what’s going to happen. With artificial intelligence, it acts a little bit more like a human would act, and the output is not always going to be as deterministic; it depends on a whole bunch of factors and, not to go way too in the weeds, on the type of artificial intelligence.
[03:56] Daniel Shorstein: There’s a whole slew of those. One of them is generative artificial intelligence, where the output may be creative and novel and new. Or the output could be just a yes or no, or a prediction, depending on the type of AI model.
[04:15] Daniel Shorstein: But there’s a whole slew of things that fall under the umbrella of AI.
[04:17] Mike Sibley: Yeah. No, that’s a great overview, I think, for our audience and for what we’re talking about today. So now let’s try to keep it a little high level, because I know we’ll probably get a little bit weedy in some of these topics, but let’s keep it at a high level.
[04:32] Mike Sibley: Maybe, you know, Julie, I’ll start with you as I hop around here a little bit. You know at a high level from your point of view in that strategic human capital realm in the HR realm, what are maybe some pros and cons of embracing AI in the workplace in people’s lives and more specifically maybe in a manufacturing environment that, you know, business owners, manufacturer owners should consider? Maybe things they’ve thought of, maybe some a few things they haven’t.
[05:05] Julie Kniseley: Well, I think it’s inevitable, so whether we are prepared for it or not, it’s coming, and the question becomes: how does a business owner prepare the workforce for it? It’s kind of that fear of the unknown, right? It’s very scary, and people are hearing that jobs are going to go away, or maybe I’ll get laid off, or something’s going to happen, or I’m going to get replaced. And in order to take those fears and stamp them down, I’m 100% biased toward communication, communication, communication. You have to start the conversations with your workforce early. You have to figure out how even you’re going to use AI. I talk to clients all the time, and right now some have not really dabbled a whole lot in AI.
[05:38] Julie Kniseley: Or they think it’s available to help you write an email, and that’s the extent to which they might be using AI, and it goes so much deeper. And it is happening, and it’s evolving and becoming smarter so quickly. I think that even if you tried to have a plan today for what you’re going to do, it’s going to change in six months.
[06:00] Julie Kniseley: So I really think it’s important, from a workforce perspective and a human perspective: we don’t like change. We resist change. So we’ve got to get that workforce prepared early and often, and engage them in the conversation so that they can assist with figuring out what AI can do. How can it make their jobs easier, better, more efficient, etc.?
[06:21] Julie Kniseley: How can they all win together? It might change their job, their focus may be different, but it’s there to as an enhancement, not a replacement, right? So I just think that right now companies are trying to figure out where they’re going and what they’re going to do with it, and they need to start those conversations now with the employees, too, and say, “We’re not sure where we’re going, but we’re having a conversation. We’re looking into it, and we want you to participate.”
[06:46] Mike Sibley: So Julie, just jumping off that for a second. One of the things I often think about: we do, and we’ve seen this going back years, forgetting about AI, we do process improvement and we cut out wasteful activities.
[07:04] Mike Sibley: And the first thing people worry about is, oh, well, if I have less work to do now, what am I going to do? Are they going to get rid of me? And I think it’s kind of the opposite in my mind: we’re opening up more capacity so we can get more throughput, so we can do more with less. I mean, is that kind of part of the conversation?
[07:25] Julie Kniseley: Absolutely. Absolutely. And it’s that maybe it’s the things that an employee doesn’t like to do, you know, the part of their job that’s the most frustrating or the most time consuming. Well, AI might be able to help with that. And so then they can do the things they do enjoy. So it can shift the jobs in very positive ways. But again, we have to be able to have those conversations and tell that story upfront.
[07:43] Julie Kniseley: The less we talk at the beginning, the scarier it’s going to be for people. And the people actually doing the work are the best ones to understand the impact that AI can have in their role and how it can help them. So if we’re not asking them, somebody that’s 30,000 feet doesn’t necessarily know what’s going on day-to-day. And that’s why we need to have that communication early and often.
[08:06] Mike Sibley: Well, and that’s a good point, Julie. We’ve talked on this show before about connecting operations, maybe what’s going on in the warehouse, with what’s going on upstairs and the financials, and a lot of the time we say: simply talk to your folks, talk to those who are involved, because they’re going to tell you, here’s a problem, here’s a problem. They may not always have a solution, but they can help you identify the sources. Same thing here. It sounds like AI is no different: incorporate not just certain levels or management levels but all folks within your organization.
[08:34] Mike Sibley: Especially those who, like you said, are going to see the greatest impact to their day-to-day job and how it could benefit them. Having that conversation and that communication upfront makes for, hopefully, an overall better adoption, if you will, of AI or its next iteration, as it is always changing. So that’s a great point.
[08:59] Mike Sibley: Daniel, that kind of speaks to what we talked about earlier, being part of digital and making the parts of our job that we don’t like as much better, or being able to get data in a better, more efficient manner that we could then make useful.
[09:16] Mike Sibley: I’m assuming maybe a pro of AI in your world is the same: hey, this is yet another tool to help us accomplish that. Or maybe you can shed some light there.
[09:25] Daniel Shorstein: Yeah, absolutely. You hit on something that a lot of the clients I work with really struggle with, which is just getting an understanding of insights from their data. Oftentimes the data is in different formats. It sometimes is messy.
[09:38] Daniel Shorstein: Sometimes it’s not accessible right away, and automation of that process is critical, and standardization of the data is critical, to even getting insights. But even taking standard automation and standard methods of pulling that together, you can run into challenges where the data just might not have everything you need, and that’s where generative AI can really play a role in helping to standardize and clean up data. So for example, imagine you’re trying to get insights from data where the description field of payments may be pretty limited, or a bunch of acronyms are put in.
[10:13] Daniel Shorstein: Generative AI can actually play a role there in looking at the combination of the vendor as well as that messy description field, and it can infer, similar to how a human might, what that payment is actually for.
[10:35] Daniel Shorstein: And you can go through a process of really cleaning up your data and then getting some real true insights or pulling it up at a high level for management to be able to get some insights to be able to make decisions or to get an understanding of how their spending is in a categorical way.
[10:54] Daniel Shorstein: Something that you couldn’t get otherwise because, you know, traditional automation methods can’t really do that. So that’s just one of many examples where generative AI can really, you know, play a big role in helping, you know, clean up data, standardize things, and getting insights into your data.
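The pattern Daniel describes can be sketched in a few lines. Everything here is illustrative, not any vendor’s API: the classifier that would normally be a generative AI call is injected as a plain function (with a keyword-lookup stub standing in) so the sketch runs on its own.

```python
from typing import Callable

def categorize_payment(vendor: str, description: str,
                       classify: Callable[[str], str]) -> str:
    """Combine the vendor and the (often cryptic) description into one
    context string and hand it to a pluggable classifier. In practice
    `classify` would wrap an LLM call; injecting it keeps this testable."""
    context = f"Vendor: {vendor.strip()} | Description: {description.strip()}"
    return classify(context)

# Stand-in for the model: a lookup that mimics the kind of inference an
# LLM would make from vendor plus acronym-heavy description text.
def stub_classifier(context: str) -> str:
    rules = {
        "FRT": "Freight",
        "MRO": "Maintenance, Repair & Operations",
        "UTIL": "Utilities",
    }
    for acronym, category in rules.items():
        if acronym in context:
            return category
    return "Uncategorized"

print(categorize_payment("Acme Logistics", "INV 4417 FRT MAY", stub_classifier))
# Freight
```

Swapping the stub for a real model call is the only change needed to run this over an actual payments extract, which is what makes the human review of its output easy to bolt on afterward.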
[11:10] Mike Sibley: Yeah. Well, that’s great. And Tomas, I know you’re in the technology realm, the technology field.
[11:13] Mike Sibley: I feel like I can use technology pretty well, but you and your team understand it at an even higher level. And AI, I can only imagine, has its own complications and challenges; obviously a lot of great things, but also a lot of new problems that it maybe creates in your realm.
[11:41] Mike Sibley: What are some of those when it comes to IT, technology and the infrastructure there?
[11:47] Tomas Sjostrom: Yeah, I’m very excited about what’s happening, and has been happening for the past year, within AI. And it seems that over just the past three months or so, the use of AI agents has just exploded, and the solutions people are building are incredible.
[12:02] Tomas Sjostrom: So there’s all this excitement about this new technology, and sadly, internal IT departments are often the last ones to adopt new technology. They tend to be conservative people who think of security and stability first, which is good, and we want them to do that, but they need to embrace and understand this new technology, because they need to be there to guide the business on how to implement AI, generative AI and agents in a secure, efficient and reliable manner.
[12:35] Tomas Sjostrom: It is so easy as always with new technology for people to get a free account somewhere and they can take all their data and just upload it to some chatbot somewhere and they get fantastic results. I got an account with agent.ai the other week and it’s all free and I can build agents and do all of this stuff having it answer emails for me and things like that.
[12:59] Tomas Sjostrom: So all of that opportunity is fantastic, but you also have to balance it and think about what is happening with your data. Where is this data being stored? What kind of privacy policies am I agreeing to when I sign up with these companies? Make sure you retain the rights to your own data, and make sure no one can jump the boundaries and get access to your data when they shouldn’t.
[13:26] Tomas Sjostrom: So for me, number one, I think IT and technology departments, they need to, you know, get with the program and adopt and understand this new technology and then on the other hand, business also needs to be a little more risk conscious when they test and implement these solutions so we don’t lose the integrity of sensitive information and things like that.
[13:52] Daniel Shorstein: And if I can maybe double-click on that, Tomas: similar to the IT department thinking about those specifics around understanding what the policies, procedures and governance are with some of the tools they’re using, I think it’s also important for companies themselves to think about putting in place the right level of governance, policies and procedures for how their employees and leaders should be using different AI technologies.
[14:21] Daniel Shorstein: Similar to the other policies and procedures they may already have in place for traditional technology use. Because let’s face it, most companies probably have at least one person already using these tools, whether you like it or not, whether you tell them to use it or not.
[14:40] Daniel Shorstein: So I always recommend just getting in front of it and putting those policies in place, getting some trainings in place, even at a basic level, just so people know what they can and can’t use these tools for and which tools they can and can’t use. And then it can always expand beyond that and get more explicit once they’ve done more due diligence and more research, to the exact points that you made, Tomas.
[15:00] Mike Sibley: Good point. Let’s dig into that a little bit more, Tomas, if we could, because it seems like every month there’s a new AI system and it’s the greatest thing. We know some of the big ones that are out there; of course you’ve got ChatGPT and some of the others. And one that’s obviously out there is Copilot.
[15:27] Mike Sibley: Copilot tends to come along, and I’m not here to endorse anything, just looking at what people have access to the most, and that seems to be one. Tomas, what do you see from a risk standpoint if, say, people are using Copilot and then think, hey, I’m going to go try out this other one now?
[15:46] Mike Sibley: How should they be looking at it from a risk standpoint? I know you mentioned uploading data, and there’s confidentiality there, but what kind of steps can they take with something like Copilot, which is more integrated in, versus some of these other ones? How would you talk to a client about evaluating those different systems?
[16:07] Tomas Sjostrom: Yeah, I think from the easiest or simplest standpoint, number one, know what your information assets are and know where your sensitive information resides and then to Daniel’s point, I think having guidelines and policies around how people use AI in connection with your information or your data, I think that’s an incredibly important start.
[16:23] Tomas Sjostrom: And I saw some research from Dun and Bradstreet last week that indicated that 40% of the Fortune 500s had already either implemented or started to implement AI. So like we are, it’s time to get with the program.
[16:52] Tomas Sjostrom: Like I said before, one aspect I’ve been thinking about a lot lately that I think is concerning: when you implement an AI solution or chatbot or something like that, you want to connect it to as many internal data sources as you can, because that’s usually where the strength of the solution comes from. And all those different databases already have some kind of access rights or data privilege system in place. What a lot of people haven’t thought about is that when you pull that data and make it accessible to a chatbot, you bypass all of those data privileges and access rights.
[17:26] Tomas Sjostrom: And so I think that’s one area where it’s important to be a little extra cautious. Make sure that if you work for a health care provider and you have privileged information about patients somewhere, and there’s a limited group of people who should have access to that, you don’t connect it openly to an AI so anybody can search for that information. That would be a violation of HIPAA, and you don’t want to go there.
[18:04] Tomas Sjostrom: And I think the same applies to manufacturing as well. You might have very sensitive drawings or, you know, information. Make sure you retain the privileges that you have in your old systems even in the new.
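One way to avoid the bypass Tomas is warning about is to carry each document’s original access rights along with it and filter on them before any text reaches the model. A minimal sketch, with hypothetical structures (the `Document` class and role sets are illustrative, not any product’s schema):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    # ACL carried over from the source system (drawings, patient records, etc.)
    allowed_roles: set = field(default_factory=set)

def retrieve_for_user(user_roles: set, docs: list) -> list:
    """Filter candidate documents against their original access rights
    BEFORE any text is sent to the model, so the chatbot can never answer
    from data the asking user couldn't open directly in the source system."""
    return [d for d in docs if d.allowed_roles & user_roles]

docs = [
    Document("drawing-001", "Sensitive CAD notes", {"engineering"}),
    Document("handbook", "PTO policy", {"engineering", "everyone"}),
]

visible = retrieve_for_user({"everyone"}, docs)
print([d.doc_id for d in visible])  # ['handbook']
```

The key design point is that the filter runs on the retrieval side, per request and per user, rather than trusting the model or a shared index to respect permissions it never knew about.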
[18:18] Daniel Shorstein: To that end, though, I wouldn’t want everyone to fear the use of AI completely and just avoid using it.
[18:30] Daniel Shorstein: So I think one thing that may be important, back to a point you made earlier, Tomas, is getting an understanding of how that third party manages the privacy and security of your data, including how they manage and retain those access restrictions. For example, I would imagine and hope, though I haven’t verified, that a company like Microsoft, which has incorporated generative AI through Copilot into the Microsoft suite, has done a pretty diligent job of making sure that when you’ve linked up data from your existing Microsoft SharePoint and OneDrive,
[19:01] Daniel Shorstein: they’re going to retain the restrictions that are already in place. So if I connect my generative AI through Copilot to certain files that I have access to but Tomas doesn’t, I don’t think Tomas is all of a sudden going to be able to access them through his Copilot, because Microsoft has done a good job.
[19:24] Daniel Shorstein: But if you start using a third party app that the entire business has access to, you shouldn’t automatically assume that they’re going to treat it the same way as Microsoft. So, it’s about understanding, reading the documents, and maybe even having a conversation with their IT folks.
[19:40] Mike Sibley: So, Daniel, let’s build on that; it’s a good point. I think that’s the great dynamic here: Tomas is trying to make sure the organization stays safe while using tools that are appropriate, and you’re saying, “Hey, it’s okay to still do this stuff, because there are ways to accomplish both.” But let’s dig into that a little more in terms of applications that are out there. I know we talked about Copilot, and I know I’ve created, in a secure environment, some GPTs that allow me to analyze the same data every single month. But what else do you see out there, what applications do you see, that might be promising? I know you can’t get into a million things here, but maybe a couple of highlights that you think would be useful in a manufacturing environment.
[20:32] Daniel Shorstein: Yeah. One that comes to mind: Tomas mentioned there have been a lot of new advancements in the agent aspects of generative AI, and a new model that OpenAI just released over the past week is their full o3 model, which is a reasoning model. That basically means it will not just provide you with the answer right away; it will actually plan and think ahead.
[20:53] Daniel Shorstein: And the other thing that’s super impressive about it is that it has what are called tools available to it. It can pull up a tool such as web search, or Python to run code and do calculations. So you could use a model like o3 in a manufacturing setting to do some research on cost comparisons.
[21:14] Daniel Shorstein: So let’s say you have a whole list of parts that you’re looking at potentially swapping out to lower the cost of manufacturing an item, and your traditional method is having somebody go out and do the research, compare the costs, and look at, well, if I buy above this threshold, does it lower the price? And how does that actually impact my end product? You might not be able to automate that full process now, but you can do a whole ton of it by handing it off to AI.
[21:39] Daniel Shorstein: So you may need to be explicit in how you instructed to do so. But I would, I mean if you have an interest and you have access to ChatGPT, I encourage you to just try it, see what it can do because it’s incredible what it can do. But I think that’s a good use case that comes to mind.
[21:57] Mike Sibley: The only thing I always caution, and I always do double checks, is you’ve got to look at the output and ask, okay, does this make sense? In other words, I don’t know that it’s at a point of perfection where you can depend 100% on everything and not have to double-check anything.
[22:18] Daniel Shorstein: Well, to that point, one of the things I like to say to people who, and there’s people on both sides. There’s people who, like my wife actually, won’t use generative AI at all because she says, “Well, it doesn’t get everything right all the time, so how can I trust it?” And then other people who might just be like super enthusiastic and like, “I’m going to run my life with AI. I’m going to give every decision to it.”
[22:35] Daniel Shorstein: Well, there’s probably a middle ground, but the way I like to think about using generative AI is to treat it like you might treat a very book-smart but maybe not highly experienced professional. Maybe an analyst level, maybe a manager level, depending on what it can do, but treat it like you would a human you’ve not worked with before, right? You’re not just going to take their output and trust it immediately. You might trust but verify.
[22:58] Daniel Shorstein: So, take the output. Let’s say you try the experiment we’re talking about here, having it do some research comparing costs for parts or materials. Give it a small task, come back, review it, see how well it did, see where it made mistakes, and then remember that, because it’s going to be fairly consistent in what it’s good at and what it’s not good at.
[23:21] Daniel Shorstein: But as long as you always treat it like you might treat a human who isn’t perfect but really good at what they do, you’ll at least keep that human in the loop, that review in place, which is critical, like you said, Mike, it’s not always going to be perfect. So, you should make sure you’re still looking at the outputs.
[23:39] Mike Sibley: Well, and Julie, I want to go to you in a second, but the one thing, whether you’re talking to a human or AI: if you ask the question wrong, you’re probably going to get an answer you’re not wanting to get, right?
[23:48] Julie Kniseley: No, definitely.
[23:50] Mike Sibley: So Julie, in terms of the HR function, how do you see AI, either now or as it’s coming, impacting, being helpful to, improving, or adding complexities to the HR function?
[24:15] Julie Kniseley: Yeah, all of the above. The amount of things it can do from a transactional standpoint is incredible. It’s no secret: HR is very document driven, heavy-duty paperwork; there are like a million forms for this and that. So it can really automate a lot of those processes, which is fantastic. It saves a lot of time and, in a lot of cases, does better, with the human checks and balances to make sure things are done properly and when they’re supposed to be. Reviewing resumes, reviewing applications for jobs.
[24:38] Julie Kniseley: It can do predictive analytics on which people might be the most successful. Going back to your point, though, it’s not perfect, and one of the concerns now at the EEOC and other federal organizations is making sure that it’s not biased. To use an example, say somebody builds out an application process and it’s completely automated. Candidates have an initial screening with a chatbot instead of talking to a human.
[25:10] Julie Kniseley: Well, if that person has certain types of disabilities, maybe they’re not responding the right way, or they’re not using the keywords the right way. With applicant tracking systems now, AI will review every single resume. Wonderful, saves a lot of time. But now there’s something called white fonting. Anybody heard of that? White fonting.
[25:29] Kevin Golden: No.
[25:30] Julie Kniseley: Somebody will take their resume and add keywords that match the job, using a white font, so that it gets past the applicant tracking system and gets put to the top of the pile, and it’s not actually visible on the resume. So, for every benefit, there’s somebody or something that can make it not quite as good.
[25:53] Julie Kniseley: So, Daniel, to your point, you’ve got to have a human involved at some point, because again, it’s so new and it’s ever evolving. We need to know, from an HR perspective, where the issues are that can create unconscious or conscious bias within these systems, where it’s going to discriminate in some fashion, because again, the AI was built by humans.
[26:13] Julie Kniseley: So right now HR is looking at all these benefits but also asking, okay, where do we need to watch out? And one of the biggest challenges from an HR perspective is that, in a lot of cases, it’ll take years for the regulatory or litigious environment to catch up and give us better direction on what we’re supposed to do.
[26:38] Julie Kniseley: So, for now, kind of at the beginning of the conversations, policies and procedures, how are you going to use it? What is permitted or not? Can you block it so that employees can’t access certain AI systems while they’re using, you know, your server or something like that? And again, I’m not the techie in this group, gosh knows. But, you know, bad things can happen if somebody is motivated to do that kind of thing.
[27:01] Julie Kniseley: And again, it’s not scary, but you need to put some guardrails on all of this, knowing that those guardrails are going to shift and change over time as the AI gets more mature. Is that the right way to say that?
[27:16] Mike Sibley: Well, yeah.
[27:19] Mike Sibley: And I, you know, it’s interesting because, you know, it’s one thing when you’re dealing with calculations that you can kind of verify, but, you know, you’re kind of bringing up, well, we’re dealing with human beings here, right? We got human lives that and what’s really funny when you bring up the bias piece of it is for however long and Julie you’ve gone through and you’ve trained us even on how to interview people and right the human bi, like our own bias you, you worry, you teach people not to be in your interviewing process. So now you got the same concern AI so kind of Daniel’s, Daniel’s thought of going back to hey just pretend this is a h, so you got like this human component that almost seems like it comes up as part of this because it’s the same issues almost.
[27:54] Julie Kniseley: Yeah, for sure.
[27:55] Kevin Golden: And it goes back really quickly to what you were saying earlier, Julie, about AI not being here to replace you, but to be a tool alongside you.
[28:13] Kevin Golden: The part you don’t like, maybe going through all those resumes and the time it takes to read through them, it can quickly synthesize. But you just pointed out some very good loopholes, if you will, some I wasn’t aware of, warning flags to look out for that inherently exist because, again, it’s created by humans. It’ll get better over time.
[28:31] Kevin Golden: But it still exists. So instead of one replacing the other, it keeps pushing the fact that they’re working alongside each other to help with what you said, Mike: throughput. Do a lot more with a lot less in a short amount of time, make good decisions, and provide value in those decisions.
[28:49] Julie Kniseley: Yes.
[28:49] Kevin Golden: Yep.
[28:49] Mike Sibley: Yeah.
[28:49] Daniel Shorstein: One thing I want to bring up: we’ve talked about several use cases, and we’ve talked about a lot of the risks that can come with using generative AI. But I like to think about where you can actually add value today.
[29:08] Daniel Shorstein: So for our listeners and viewers, you may be thinking, well, I don't know if I want to get into it because of all these risks and all of the regulations. What I like to recommend is to think about it from a risk-based approach, right? But you also look at return on investment, just like you would any other project, any other automation opportunity.
[29:31] Daniel Shorstein: And you really just look at, okay, what are the things that take me or my people a lot of time and energy that we don't like doing? Then, if I brought a generative AI tool into the mix, how can I identify the use cases, or put a human review in place, that mitigate that risk the most? So you might not want to look at automating résumé screening today because, as Julie mentioned, that could introduce some real risk.
[29:59] Daniel Shorstein: But you may want to use it for, let's say, flagging potential transactions to review as part of an internal control program. You wouldn't replace any existing internal control, but you could add to your current internal controls with anything the AI pops up and says, "Hey, I noticed this looks different or this looks funny, you may want to look at it." After it flags something, a human is going to review it and make a decision. There's really very minimal risk, if any, that that introduces. All it introduces is an additional set of eyes on something.
[30:36] Daniel Shorstein: So, that’s an example of where you could use generative AI today and it really doesn’t introduce a significant level of risk, but could add a lot of value.
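The internal-control use Daniel describes can be sketched in a few lines. This is a minimal illustration, not James Moore's actual approach: a simple leave-one-out statistical outlier check stands in for the AI flagging step, and the field names ("vendor", "amount") are hypothetical. The key design point matches what Daniel says: the code only flags, and a human reviews everything it surfaces.

```python
from statistics import mean, stdev

def flag_transactions(transactions, threshold=3.0):
    """Flag transactions whose amount is unusual for their vendor.

    Nothing is blocked or acted on automatically; the returned list
    is a queue for a human reviewer, an extra set of eyes.
    """
    # Group amounts by vendor to build a per-vendor baseline.
    by_vendor = {}
    for t in transactions:
        by_vendor.setdefault(t["vendor"], []).append(t["amount"])

    flagged = []
    for t in transactions:
        # Leave-one-out baseline: compare each amount against the
        # vendor's OTHER transactions so a large outlier can't mask itself.
        others = list(by_vendor[t["vendor"]])
        others.remove(t["amount"])
        if len(others) < 3:
            continue  # not enough history to judge
        mu, sigma = mean(others), stdev(others)
        if sigma and abs(t["amount"] - mu) / sigma > threshold:
            flagged.append(t)
    return flagged
```

A real deployment might instead send candidate transactions to a generative AI service for a plain-language "this looks funny because..." explanation, but the control structure, machine flags and human decides, stays the same.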
[30:43] Mike Sibley: There's a planning aspect to this, and Kevin, I know you have a question around this, but I'm thinking about one thing. Tomas, I think back to what you were talking about with data. With something like a Copilot, part of the power is having your data set up in the right way so it can access all the things it should. You've probably spent three days at a time talking about this, but there's a planning aspect that says, if I'm going to incorporate some of this, I've got to really look at my IT infrastructure and data infrastructure to maximize the use out of it. Because I've heard way too many people say, "Yeah, it doesn't seem to do anything," and maybe they're not set up in the right way.
[31:35] Mike Sibley: And maybe you can hit on that, Tomas, a little bit.
[31:35] Tomas Sjostrom: Yeah. And part of that is what has gotten me so excited over the past couple of months because I feel like we’re quickly moving beyond using AI to create cool action figures in a box or whatever to actually make internal processes more efficient.
[31:57] Tomas Sjostrom: And like Daniel and I were talking, we were in a meeting earlier today, just bouncing ideas off each other, and we went down the path of discussing what AI solution could have an efficiency impact on my organization, which services lots of clients with their IT infrastructure and gets requests for help and troubleshooting and things like that.
[32:23] Tomas Sjostrom: And I think we're actually on to something quite interesting with the proof of concept we're going to do. For me, the hurdle there was having someone with me who understands what we can do with AI in a secure manner with the data we had. And Danny, you even brought up the fact that if we have sensitive data, the AI agent can actually scrub it.
[32:50] Tomas Sjostrom: So again, taking even more risk out of the picture, because what we're looking for is just to enable our engineers to more quickly get to the point of troubleshooting things. We don't really care what individual or what data was there; we want to know what the problem was. So what I see could help a lot of organizations is to brainstorm around what you can do in your business with AI, and maybe then bring in an expert such as Daniel, someone who knows what you technically can do with the tools, to facilitate that discussion so you don't get stuck creating action figures or whatever.
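The scrubbing step Tomas mentions could look something like the sketch below, a regex pass over ticket text before it ever reaches an external AI service. The patterns and placeholder tokens here are purely illustrative; a production redactor would cover far more cases (names, addresses, account numbers) and would likely use a dedicated PII-detection library rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real sensitive-data detection needs a
# much broader, tested rule set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace sensitive substrings with placeholder tokens so the
    troubleshooting content survives but the identifying data does not."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The point, as in the conversation, is that the engineer still sees what the problem was, while who it belonged to never leaves the building.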
[33:28] Mike Sibley: Well, I think that brings up a good point we mentioned earlier. Whether you're on the end of the spectrum of "I've just dabbled, I don't even think I'm using it in my organization or my life," or you're gung-ho, trying out every new bot or tool as it comes out, I guess the question is, in your respective areas, and maybe we'll go right back to you, Tomas: what are the things to be aware of or to prepare ahead of time? I would imagine most of this, like many great tools, isn't as simple as plug and play.
[34:02] Mike Sibley: But what are some things they should consider, or other things we haven't discussed, that can lay the foundation for utilizing AI?
[34:28] Tomas Sjostrom: Yeah, I think we mentioned a lot relating to, you know, understanding your data, your information, the privileges with that and protecting that. I mentioned to be aware of the privacy policies of the tools that you implement, using policies, guidelines and things like that.
[34:48] Tomas Sjostrom: One area that I think should be mentioned where cyber criminals are using AI tools is to create even better phishing campaigns. I mean, we’ve seen just over the past month an increase in business email compromise just because they’re getting better and better and better at, you know, creating fake emails that look real, fake websites that look real. And they managed to get people to click on those links and download malware and things like that.
[35:14] Tomas Sjostrom: And they’re also using AI tools to build malware faster and more efficiently. And then you also have, you know, the existence of deep fakes, voice and videos that make it sound like your big boss is calling and wanting you to transfer $200,000 to this other account. And you’re like, well, it sounded like Mike, so I better do it, you know, and just like in the old days, always call back.
[35:39] Tomas Sjostrom: You have this person's number. Tell them, "I will do that. Let me just call you back," and call them back on the number you know is theirs. So I think, sadly, we're going to have to be even more observant of those things as this technology also gets adopted by the cyber criminals.
[36:02] Mike Sibley: Yeah, that may be the scariest thing I've heard all day, because obviously I've had clients who have gone through and actually suffered from some of those fake emails or whatever. But I didn't even think about that next step.
[36:21] Mike Sibley: Now you've got voices and video and everything else, where the boss can call up and say, "Hey, move this money around." Gosh. So it's happening now?
[36:30] Tomas Sjostrom: I was giving a presentation on this subject last week, I think it was, in Daytona and one of the attendees worked for a bank and during the meeting he got an email from their management warning them about deep fakes and these attempts to make them deposit money into bad accounts and whatnot. So that’s a real threat.
[36:52] Tomas Sjostrom: And again, the solution is easy. You know, call back, verify, have two people verify everything. So, it’s not rocket science to defend yourself against it, but it is a scary concept.
[37:02] Mike Sibley: Yeah.
[37:09] Mike Sibley: Well, Julie, maybe moving to you. I think you mentioned earlier getting people involved, communicating, a lot of that good stuff. What else can people do in their organization, in their company, to overcome some of the natural cultural and organizational barriers and be open to adopting AI?
[37:28] Mike Sibley: Like you said at the beginning of this podcast, you know, it’s coming whether you’re ready or not, right? It’s here. It’s coming and it will continue to evolve and continue to grow and be more and more mainstream.
[37:37] Mike Sibley: So is there any groundwork an organization can lay to help overcome some of those natural barriers to adopting AI?
[37:53] Julie Kniseley: Absolutely. Building on what Tomas was saying: training, training, training. You can't train your employees enough on cyber security. You can't train your employees enough on how to prevent some of the bad stuff from happening. But you can also help employees who might not have that digital literacy get comfortable in an AI environment. They might never have touched it, or they might think it only helps write an email.
[38:13] Julie Kniseley: So training, training, training, whether it's upskilling for a job that's going to change and require a certain amount of digital literacy, or reskilling, where the job they're doing is going to be handled by automation and you can move them to something else. Because, again, there aren't millions of people waiting for jobs right now. There aren't.
[38:31] Julie Kniseley: So you need to hold on to the workforce and the good workers you have and the best way to do that with a changing environment is make sure that they’re prepared. So training is going to become incredibly important and sooner than later in order to establish that trust and that comfort level with the new technology.
[38:51] Julie Kniseley: I mean, when I started out there were no computers, right? Because I'm old, and you have to learn to adapt. And it's going to take some work in a multigenerational workforce. A 22-year-old might be absolutely fine with changing technology because they've been playing computer games since they were 13, versus somebody on the older end who maybe just hasn't had the experience or the need to.
[39:14] Julie Kniseley: So it's making sure that everybody, no matter their generation or skill level, is prepared and trained so that they're comfortable using the tool and using it in a very safe way.
[39:30] Daniel Shorstein: And I'll add one thing right in line with what Julie was saying: leadership. The tone at the top, setting an expectation and encouragement of innovating and experimenting. Doing it in a safe way, obviously, but telling the organization, hey, this technology is coming, we're working on figuring out how to incorporate it safely, but go ahead and try using these tools for yourself. It's really helpful, even using it for personal purposes.
[39:55] Daniel Shorstein: Because there's really no guidebook out there for how to write a prompt. I mean, there are, but the best way to get good at using generative AI is to just use it. If you spend a few hours a day on it, even an hour a day, on anything, you're going to quickly get comfortable with what it's good at and what it's not good at. That's really the best training you can get for getting familiar with it and upskilling.
[40:19] Tomas Sjostrom: I just want to really stress what Daniel was saying there, because I agree with it so much. I firmly believe that the best approach an IT leader, or a business leader for that matter, can take toward AI is to enable and encourage its usage, but lead it so it becomes a secure and efficient implementation.
[40:46] Tomas Sjostrom: If you can do that, then you bypass a lot of the risk, but you still enable the implementation and the usage of this technology because to your point, Daniel, this is happening. We’re way past the curve of initial implementation. So, you just have to embrace it and enable it.
[41:04] Mike Sibley: That’s a great point. I think, you know, the funny thing, Daniel, is that, you know, hey, I do this all the time. I ask how to do something like, hey, this is what I’m trying to do. Can you show me? Then we kind of go through some iteration. Next thing you know, I’ve created a GPT that helps me do the same thing over and over again.
[41:24] Mike Sibley: It'll train you on how to use it, in some ways. Am I about right on that?
[41:30] Daniel Shorstein: One of my team members will come to me because she knows I'm like a GenAI guru, and she'll say, "Hey, Daniel, help me figure out how to do this." And I'll say, "You know what? The expert is the AI itself. Let's do it together." So I'll show her how you can just pop it open and say, "Hey, I'm trying to do this with generative AI. Can you think of different ways I can do it?" It feels kind of meta, but it works so well. And it's kind of fun the first time you actually do that. You're like, "Oh, this is great. I now have this helper who won't call me stupid for asking stupid questions. It'll actually just kindly and gently help me."
[41:59] Mike Sibley: Well, who knows how these things will evolve. Maybe eventually it'll call you stupid, I don't know. But I've heard there are some of these AIs where you can just have chat conversations, the "how's the weather" type of thing. So it's really interesting what's out there.
[42:16] Daniel Shorstein: Yeah, for sure.
[42:18] Tomas Sjostrom: A couple of weeks ago, an AI tool wanted to discontinue the conversation with me. It told me it wanted to end the conversation, and I'm like, oh, I got shut down. I didn't think I asked anything that was that bad, but okay. Thanks.
[42:37] Mike Sibley: So we're getting close to the end, but I want to hit on some final points from each of you as we wrap up. Julie, maybe reiterating the high points on overcoming the cultural barriers and adoption from the people perspective, what are your top couple of points on what a business should do as they go down this road?
[43:09] Julie Kniseley: It's the workforce. First, you have to figure out where your gaps are. If you're going to start implementing AI, you need to know where those gaps are so that you can level set where you need to take people. And make sure your plan is flexible, because it's going to change.
[43:26] Julie Kniseley: You know, Tomas, as you've said a couple of times, the conversation you were having two months ago was completely different from the one three months ago. It's growing at such a fast pace. I think we have to make sure we're very adaptable with any solutions we try and make sure the employees are comfortable with that. Look at where your data literacy is. Look at where your knowledge gaps are. Put a plan together for how to communicate with your workforce, make sure they're ready for this, and get their participation; they know how they need to learn. And you have to have more than one method. Some people want to watch a video. Some people need classroom training.
[44:02] Julie Kniseley: Some people just say, "Leave me alone and I'll ask AI how to do it." So you have to be very adaptable and willing to try different things for different people, depending on their learning style, how they communicate, all of those things. So, it's inevitable.
[44:20] Julie Kniseley: I know a lot of companies haven’t even touched it yet because of the fear factor, but I think that in another six months, you know, everybody’s going to have to get on board in one way or the other. So, just don’t forget that there’s humans behind it and remember to treat them that way and don’t just communicate by email, please.
[44:43] Mike Sibley: So we'll shift over to Daniel here for a second. Daniel and I are working on a couple of different client opportunities around enhancing their use of AI, data automation and bringing all this information together, so it's really cool to see. But from your perspective, reiterating some of the things you see: how should people get involved and really think about how to use it in their specific business? What are your thoughts there?
[45:10] Daniel Shorstein: Yeah. Thinking about generative AI in a manufacturing-specific context, I see maybe three areas where it can really have an impact. One is around process optimization and automation: using it to help with back-office processes, automating a lot of the things you might do creating reports and analyzing data, but also on the operations side.
[45:29] Daniel Shorstein: So understanding where the slowdowns are in the manufacturing process itself, taking the data from the system and using GenAI to help you see where you can improve those processes and your cost efficiency, enabling that lean manufacturing. The second area is actually enhancing product development.
[45:50] Daniel Shorstein: So when you’re in that product development phase, doing research and understanding, you know, customer research and having some of that innovation process be faster. It can really speed up your time to market which can have a huge impact on the business obviously.
[46:06] Daniel Shorstein: And you can also use it to help differentiate your product from the competition, doing that research for you, helping you understand what aspects you actually need to add that will differentiate you. And then the third area is really around sales, marketing and customer service.
[46:26] Daniel Shorstein: Just improving the process of, you know, providing good quality customer service, analyzing some of the feedback you’re getting, helping you with your marketing campaigns and just helping you with improving your sales process using it strategically and also using it to create some of the materials that you might use.
[46:47] Mike Sibley: Tomas, last but definitely not least, the guy who scares us the most at times with all the things that can go wrong, but also all the things that can be done to stay secure. What are your thoughts?
[46:54] Tomas Sjostrom: Yeah, I’m back to what I said a while ago. I think number one is you have to embrace and encourage this new technology. That’s the best way to get control of it, if you will. And as you do that, make sure you have guidelines and procedures and policies in place to help people do the right thing.
[47:18] Tomas Sjostrom: And from that, make sure you have your data in the right place. Know what kind of information is being shared where, and as I said initially, go through the privacy policies of the tools you're using. Another thing I want to mention: more and more software providers are incorporating AI into their solutions.
[47:41] Tomas Sjostrom: So review that, and review what happens with your data in their solution, because that could be an area people might not think of. And then, in the end, be a bit aware of the rapid technology development in this space and how that will drive adoption. Because from some of the examples everybody has mentioned, it is mind-blowing what people are doing with AI and AI agents right now. So I think you just have to embrace it, but make sure you do it in a secure fashion.
[48:22] Mike Sibley: Great. Well, thank you guys so much. I'm just thinking back: we talked about all the great things AI can do for you, and of course the other side of it, the ways people are using AI to be destructive as well. So AI can definitely enhance the value of your business; it can optimize and improve. But there's also the training aspect, because one way to kill the value of your business is to have a data breach, to have an employee click on something or transfer money or do something they shouldn't do. So there's a lot of training, a lot of information. But what comes out of this is that we shouldn't fear it.
[48:58] Mike Sibley: So again, Julie, Tomas, Daniel, thank you so much. These three lead our James Moore advisory services and are experts in their fields. For all of our listeners out there, I'm sure there are tons of questions and maybe some anxiety, and I hope people and companies can use these tools. So thank you guys again for being on the show. It was a pleasure.
[49:24] All: Thank you.
[49:26] Mike Sibley: All right, for everybody else, thank you for listening. As always, if you have questions, thoughts, comments or concerns, feel free to reach out to Kevin or me, and certainly Tomas, Julie or Daniel as well, with questions or other thoughts you may have about your business and how we can help. And with that, I hope everybody has a great day.
[49:47] Narrator: To learn more about James Moore and Company’s manufacturing services, go to jmco.com. And don’t forget to subscribe to our Moore on Manufacturing series to receive updates when new videos and podcasts are released.
[50:03] Narrator: If you’d like to be a guest or if there’s a topic you’d like to see covered on a future episode, contact us on our website. You can also follow us on social media for more news as the landscape on manufacturing continues to rapidly evolve.
Ready to explore how AI can benefit your manufacturing operations? Watch the full episode and discover practical strategies for implementing AI safely and strategically. Contact Julie Kniseley, Tomas Sjostrom and Daniel Shorstein at James Moore & Company to discuss your specific needs.