Career Compass

The Intersection of AI and IE&D with Dr. Alex Alonso

Episode Summary

Navigated correctly, generative AI has the potential to universally level the playing field in the world of work by providing everyone the ability to access the entirety of human knowledge through the use of prompts and queries. In this episode of Career Compass, hosts Demetrius Norman and Aly Sharp are joined by SHRM Chief of Data and Insights Dr. Alex Alonso, who discusses the implications and opportunities of generative AI as they pertain to students and emerging professionals and shares actionable advice on how to cultivate AI habits that equip them with a competitive edge in today’s employment landscape.

Episode Notes

Navigated correctly, generative AI has the potential to universally level the playing field in the world of work by providing everyone the ability to access the entirety of human knowledge through the use of prompts and queries. In this episode of Career Compass, hosts Demetrius Norman and Aly Sharp are joined by SHRM Chief of Data and Insights Alex Alonso, PhD, SHRM-SCP, who discusses the implications and opportunities of generative AI as they pertain to students and emerging professionals and shares actionable advice on how to cultivate AI habits that equip them with a competitive edge in today’s employment landscape.

Earn 0.5 SHRM PDC for listening to this podcast; all details provided in-episode.

Episode transcript

Rate and review Career Compass on Apple Podcasts, Spotify, or wherever you get podcasts.

Episode Transcription

Aly Sharp:

Welcome back to season eight of Career Compass, a podcast from SHRM and the SHRM Foundation. Career Compass prepares the future leaders of today for better workplaces tomorrow.

Demetrius Norman:

Thank you so much for joining us for this episode. My name is Demetrius Norman.

Aly Sharp:

And my name is Aly. This season we're covering topics related to returning to the office, mental health, and AI. For this episode, we're excited to talk about a very familiar topic: AI and IE&D.

Demetrius Norman:

This is a very special Career Compass podcast for me, and I will include Aly as well, as we both have the privilege of speaking with SHRM's own Dr. Alex Alonso, who serves as SHRM's Chief Data and Insights Officer, leading intelligence, insights, and innovation functions as well as SHRM's latest acquisitions, the CEO Academy and Linkage. As leader of SHRM's research and insights business units, his total career portfolio has been based upon practical thought leadership designed to make better workplaces and to grow revenue across industry. Dr. Alonso was recognized as an inaugural member of the Blue Ribbon Commission on Racial Equity in the workplace, a coalition created to foster equitable and inclusive cultures. His research has been featured in numerous outlets including USA Today, NBC News, BBC, CNN, and more. He has served as a member of several speakers bureaus, with more than 400 speaking engagements over the last decade. HR Magazine calls him one of the most effective communicators of data in recent memory.

Dr. Alex Alonso:

Thank you all for having me. I really appreciate the opportunity to be here.

Demetrius Norman:

And so with that, Alex, again, I just want to extend a warm welcome, and I so appreciate you taking the time to talk to us, so let's get into the discussion. If you are just tuning into Career Compass, we kicked off season eight, as Aly mentioned, addressing burnout, followed by creating your personal brand, and then imposter syndrome, and now AI and IE&D. So Alex, I saw your quote from SHRM's AI in the Workplace handbook that says, "AI is going to be a being one day, and it's much sooner than we think." You go on to say that, "By 2030, AI will have copyrights and will get credit for everything it does. That said, we need to adjust our mindsets now to look on AI not just as a tool, but as an active participant in the workplace, a colleague, a partner. AI is a part of the team." So I want to take us to the basics: what is AI, and how do you see it impacting HR professionals?

Dr. Alex Alonso:

If you were to think about AI in particular, it is nothing more than an algorithm or a machine learning kind of development or program that allows us to do one of four things. It allows us to optimize work processes, it allows us to enhance work processes, it allows us to analyze work in general and, more importantly, a variety of different problems, and it allows us to engage in generation now, meaning that it allows us to create quicker, faster, better. If you look specifically at what we're seeing with AI, there's a lot of people who engage in what I call future mongering. What I mean by future mongering is that they get nervous about, "I'm going to be displaced. I'm going to be taken over by robots." Everybody runs around crazy every once in a while when they start thinking about, "Oh my gosh, I'm going to go ahead and lose my job, and machines are going to take over the world. And before you know it, we're going to have Skynet," for those of you that ever watched Terminator.

What's intriguing about it is that's not exactly what's happening. What we've had recently is what they call a singularity event, where the world has now reached the point where we have large language models that are so efficient that they allow us to do things like using GPTs to create, to engage, to generate all kinds of work product, all kinds of different things including art, all kinds of new and novel content. That is the new thing that people are referring to when they're talking about AI. It's not the old things like, how can I automate? How can I analyze? How can I predict better? But those are all part of artificial intelligence. In fact, what's intriguing is that if you go back roughly 65 years to when the term artificial intelligence was actually coined, at a computing conference, a meeting at an old northeastern university, the basic question was: should we call this artificial intelligence or should we call it augmented intelligence?

The ironic thing is they went with artificial, but in reality it's augmented intelligence. We're still relatively far off from this notion of reaching what we call artificial general intelligence, which is when we'll actually have AI the way we all fear it to some degree.

Demetrius Norman:

I'll follow up. So specifically in thinking about that and thinking about just the historical aspect of how AI came to be and just how it continues to progress, how do you see it impacting emerging professionals? We have students who are preparing for graduation in May, June, about to enter the workforce, so what are some things that you could share that they may be looking into now and heading into the future?

Dr. Alex Alonso:

Yeah, it's funny, I think emerging professionals actually have an advantage in many cases, in large part because using this notion of generative AI in particular or AI in general is actually still part of digital literacy. And so when you think about what it is that people are seeing, whether we like it or not, there is an advantage to people who have developed digital literacy, and younger professionals, emerging professionals, students have that process, that button that exists out there.

The other thing though that I think is an advantage is, when you think about how I prompt large language models, how I get it to do something different for me, there's a lot of advantage out there when it comes to having come from an educational setting or environment. If you've been in one, whether it was through a college, university, some other venue like a trade school, take your pick, you just have an advantage when it comes to actually engaging in random prompting the right way that you need to have happen. So there's an advantage there.

What I would ask, and what I think a lot of students would benefit from, is trying to understand specifically what the next wave of artificial intelligence will be, and then looking for artificially intelligent workplaces. What I mean by that is two things. When thinking about an artificially intelligent workplace, what I'm suggesting is to try to understand, throughout your recruitment, what the culture around AI is at a potential employer. I say that in large part because there's a couple of questions I would ask if I were looking for a new job and entering the workforce. First and foremost, what is your policy? It sounds silly, but what is your policy around using the different kinds of AI tools?

Second, what is your history? What have you done over the last five to six years, because it's been around for quite a while, what have you done that pushed you to do it? What is your strategy for how your organization is going to use it? And it sounds silly because most people going into their first jobs don't ask these questions, but I think it's important to get to the heart of the matter. And then to top it all off, I would ask more about what is it that you anticipate will make you a competitive employer moving forward? In other words, when it comes to how you're thinking about AI, what is going to keep me there, not just what's going to attract me, but what's going to keep me there?

Just to do a little shameless plug, I'd actually say anyone who goes to any employer or experiences anything involving this should go in and ask, "What is your plan for how you integrate human intelligence into that as well? Is your strategy to enhance or is your strategy to displace?" It's okay for somebody to say, "I don't know." I mean, I wouldn't want to go somewhere where displacement is the obvious only answer.

Aly Sharp:

Yeah, I like that last question you posed about being strategic or how they compare to other places, because I feel like that's just a good question in general to ask. I know having... I can't really say recently done a job search anymore, but I wouldn't have even known to ask about AI, and that was in 2021. I think that hopefully our research at SHRM is promoting those conversations for HR. And piggybacking on that and going into what people are saying about AI, what do you think are some common myths about AI?

Dr. Alex Alonso:

So when you look at the myths, I think there's a lot of thought specifically, and our research shows this, that the number one fear is you're going to be displaced. When you look at the research from working Americans, it basically doesn't bear out that way. Only 21% or 22% of working Americans believe that they will lose their jobs to AI sometime within the next three to five years. Only 9% to this point have actually lost jobs. And even then it's not a full loss of a job, it's actually a partial loss of some of the responsibilities you have.

In many cases, though, they actually were positive about that loss because it was the things that could be automated, the things that they didn't want to do, that didn't engage them. So we're actually seeing, when you look at this, and I am grateful to the SHRM research team because they're the ones that arm me with all this data, that 57% of working Americans are actually excited about the opportunity to use AI and are expecting their employers to give them the tools to use the AI. Not just the access to them, but also the training. That's a win in my book every day of the week.

Aly Sharp:

Yeah, I think the training is super critical. We have two episodes on AI this season, and that's what our last guest touched on as well, is you can have AI in the workplace, but you should make sure your workers know how to use it.

Demetrius Norman:

Yeah. With that, we actually created a playbook, we as in SHRM, which has helped to inform this conversation. And so there was something that caught my attention with regards to the risk of AI and some of those risks as it relates to maybe bias and diversity and inclusion. Can you talk a little bit about that research or the thought behind what some of the potential risks are and how maybe we can avoid them or address them?

Dr. Alex Alonso:

Well, so it's funny because I'm a big believer that risks are one thing, but sometimes they create a lot of opportunity too. And so it's always important to know how to balance those, whether you're an employer or whether you're a student looking for a job, just graduating, and you're trying to think about how you go about that.

When you look at AI in particular and the way we look at AI today, believe it or not, there are a variety of categories of risks. One of the ones that stands out, because we're in the HR space, is the notion of bias, the notion of how various entities are using this concept of bias, and the black box nature of large language models or even other automated scoring techniques as well. What we're seeing is places like the City of New York, the mayor's office, and the city council have set up a requirement that says, "Okay, you need to provide an indication of what the bias is in your algorithm. That means you have to share your algorithms with us."

Well, that has a limiting effect too because what it basically says to the developers and the providers of AI is that eventually they have to turn over their algorithms if they want to play in New York City, and when they turn over their algorithms, they're giving away their intellectual property. It can be [inaudible 00:34:51], it can be made public through public records, which also kills their competitiveness. In addition, as far as use in the workplace, the European Union is typically the leading edge on a lot of these things, and one of the things we're seeing is they've put in place a variety of different approaches for how you might do this. What they're basically saying is, "You have to share this information with us. You have to demonstrate that you're doing a compliance assessment every year about how well you're complying in the use of these things to prevent bias, to prevent things like unnecessary harm or adverse effects on your workforce or potential candidates." But then on top of everything else, you want to test everything that you're doing, so they're asking all organizations to create a sandbox environment where they can test in a smaller, limited sample how these things work.

Now, that sounds good, and it's actually a good measure in many cases, that's how development happens in the world of IT, but a lot of the issues with that are that you are adding significant development costs to a lot of providers, and so all of a sudden you're killing some of the ability for startups to actually compete in that space. So you're limiting economic opportunity. But there are a variety of things that we haven't even begun to consider as far as issues. Bias, these types of things, that's the one that we know really well, that's the one that's been identified most clearly. What we haven't talked about is intellectual property rights. What happens when AI actually is used to create something? Does it get copyrights? There are 16 or 17 cases now headed to the US Supreme Court at some point for determination as to whether the Court will take them up and change the US Patent and Trademark Office's views on what makes a human being and what makes a being, so to speak, and whether there can be copyright or trademark for AI, for ChatGPT in particular. And those are hitting the streets here in the next 12 months, in the next session of the Supreme Court potentially.

In addition, you're also seeing things like cognitive decline. There are skillsets that people are experiencing decline in, but there are new skillsets that are popping up. So what does that mean for society? What does that mean for general human learning, for human intelligence, so on and so on? There's a lot of stuff out there that's popping up, and they run the gamut from the things that are protective to the things that are unlocking potential to the things that are just things that we never even thought about. And so there's a lot there that stands out.

Meanwhile, a lot of people worry about those risks; I'm the guy that wants to test and push and go further. I'm known to be that guy. I'll share with you one of the things that I love: I'm hiring a lot of emerging professionals lately, and I'm trying to be very interdisciplinary in the way that we go about this. One of the things that stood out to me was I remember I did an article about a year and a half ago about whether candidates should be allowed to use ChatGPT to develop their... What's it called? Their cover letter and their resume. The whole question is how do you know, and should somebody disclose, that they used ChatGPT?

I had somebody apply for a role, she's a data scientist from the University of Maryland, brilliant person, I mean, brilliant, brilliant person. I felt like I was meeting a trillionaire in the making. She submitted this cover letter, and in this cover letter she admitted that she used... not only admitted it, she actually was proud of it, and she said, "This is what ChatGPT told me about applying for this role and how I would be different, unique, distinct, and how it is that SHRM would help me reach my ultimate goal of being a super tech contributor across the planet." I was like, "Holy cow, talk about turning that problem completely on its head." I thought it was brilliant. She even showed later on what her prompts were, she shared them with me, her prompts for how she goes about changing or looking specifically at 10 to 15 amazing business ideas. I'm like, "You should not be sharing those."

Aly Sharp:

Keep that to yourself.

Dr. Alex Alonso:

They're going to be great. You need to keep them to yourself for now and then keep going.

Aly Sharp:

That's awesome. I mean, I recommend that students just use ChatGPT to edit or maybe add flair to their resume, maybe not write the whole thing. But I definitely think it can also be helpful for interviews when you put in the job description and it can pop out questions that the interviewer might ask. I know that was always the most stressful part, trying to figure out what this person might ask me, so that's another way to do it. I also hate writing cover letters, so more power to that girl because she knew what she was doing.

Dr. Alex Alonso:

Well, Aly, it's funny you mentioned that because one of the things that stands out to me, if you want to really talk about forward-leaning, is that we at CEO Academy partner with Wharton at the University of Pennsylvania. Wharton is the big, well-known commodity among business schools; it's the most famous one in the US and probably in the world. One of the things that stands out is they have an instructor there... not an instructor, a professor, a faculty member. His name is Ethan Mollick, and he's been on CNN and everything talking about whether we will ever be able to put the genie back in the bottle, that kind of stuff, that was his initial stuff. His answer is no, but more importantly, we should stop calling it a genie because it's actually the next most important thing.

So what he is doing is actually cataloging research. He talks to each of the LLM providers, the companies behind ChatGPT, OpenAI, he talks to all those companies about what their next development is. Every week or so he actually has meetings with them. One of the things he talked about was that he's now turning his attention to companies and how they're using it, and there's a movement now to incentivize it. You asked me before about understanding what an employer's strategy is or what their perspective is on it; there's a couple of companies out there that have actually created an in-company X prize that says, "If you come up with a great business idea that will lead to a certain amount of revenue for our company and we can execute on it, you'll get a $10 million incentive for engaging that and being the person who started it up."

Think about that. Imagine being the person that comes up with the next big idea here at SHRM, right? You create not podcasts, but quadcasts or whatever the next big thing is, right? And all of a sudden, because you thought of it, you get a million dollar bonus for doing that. That's kind of cool, right?

Demetrius Norman:

The return on the investment. I think that the thing that comes across for me is it levels the playing field, it gives you additional resources, access, and it helps to expand your thought process when you're working to create a cover letter or to be better prepared in any environment. I think the other piece too, which you alluded to, is that it's across every industry, that every industry is going to have to be prepared for the inclusion of AI in how they do business, how they do work, how they onboard folks. The key to it is just making sure that we're paying attention to all of those signs and preparing people to make sure that they're able to make that shift into the wave of the future, so that's all good stuff.

Dr. Alex Alonso:

And we haven't even talked about the new industries it's going to create.

Aly Sharp:

Oh gosh.

Dr. Alex Alonso:

Yeah, it's a whole other...

Aly Sharp:

These conversations make me so stressed.

Dr. Alex Alonso:

Well, no. I think getting back to the fear and the myth that it's going to take away something, I think we forget about the stuff that it's going to introduce to the market and the stuff that it's going to help expand and to make our lives easier. I love having these futuristic conversations because it just goes to show... I mean, if you think about our parents, I'm thinking about my mom and dad, they were listening to music on the record player. And then we had cassette players, CDs, and then it just evolved when the iPod came out and all of these other things. So there's this constant evolution that's happening, and while there's initial fear, when you look back on it, it's just like, "Oh, wow, without having that bold purpose, as SHRM talks about, we wouldn't have been able to make these changes by leaps and bounds."

Aly Sharp:

I definitely agree. And that ties into our next question of how will students be required to become ethical AI guardians?

Dr. Alex Alonso:

Ethical AI guardians, I don't know that students will be required to do that, but I think that there are three clear things that will test our ethics around AI, if that helps.

Aly Sharp:

In general, yes.

Dr. Alex Alonso:

I would encourage students and/or emerging professionals to get very familiar with this. The one that everybody knows about is deepfakes.

Aly Sharp:

Yes.

Dr. Alex Alonso:

Deepfakes are something that will test our ethics across the board, and I've seen the entire gamut of it. We all know about taking somebody's identity, we all know about that one organization that authorized the $20 million loan payout, and it was a meeting that didn't even happen. The CFO was called into an executive team meeting, and everybody else who was in it was a deepfake, and they were convinced. And so they actually authorized the payout, which was problematic all the way around. We know the negative side of it. That's probably the most extreme case to this point. I myself have actually created an Alex deepfake. Believe it or not, I've created one in large part because I wanted to be able to demonstrate that it's very easy to do. And if I can build one, that's not a good sign; a lot of people can do that.

I worry about identity theft, about the authorization of things, but at the same time, I also look at the positives of it. If you want your CEO or CHRO to be able to do an orientation, or you want your executive team to be able to do an orientation and update it over and over again without having to film them and without having to do all those things, there is a case to be made for creating a deepfake that is used in limited cases, where they basically are doing those types of things. And you can give a special welcome to your new staff every time that feels completely customized but is actually not fully customized. I'd argue that's one approach that I'm seeing in the market that is actually interesting. Deepfakes are just one example, though; you run the gamut all the way around.

The next issue that I see that is interesting is really around trying to determine, again, not the intellectual property, but how you deal with what they call spurious creation or spurious linking. What I mean by that is hallucinations, right? Hallucination rates have dropped from where they were at the original launch of things like generative AI. To give you context, a hallucination is basically when it comes up with a response that never actually happened, that isn't real. And because it sounds so convincing and comes across so clearly, it ends up getting used. The most famous case around this is a lawyer in New York who actually submitted a brief, one of the most technically sound briefs ever, based on 17 cases, and none of those cases ever happened in any way, shape, or form. He was not disbarred, but he lost his ability to practice law for a while.

What's intriguing about that is that kind of spurious stuff has to improve. The issue is large language models tend to be designed to be unpredictable, and most people thought, "Well, the unpredictability is that it's never going to give me the same response twice." That's true, but it also means that you're never going to get the perfect response or a fully accurate response over and over. And it's dependent upon two things. One is, what is the world of information that exists out there? And if you don't know, believe it or not, the average amount of data that exists is doubling per day. Per day, right?

Aly Sharp:

I just have to say I picked a good graduate program to go into.

Dr. Alex Alonso:

You really did. The other thing it depends on is how well the algorithm is learning. And so that learning is also a piece that happens, which speaks to how well those spurious things are taking place. The next issue that exists out there is bad actors. There are people who are going to be bad actors, people who are problematic in some way, shape, or form, or have an incentive to train the models to be inaccurate or to cause inaccuracy. In the world of espionage, imagine one country trying to attack another country through cyber, and they recognize there's a dependency on a large language model, so they train that model with bad information. That's country versus country, right? Now imagine company versus company, competitors doing those types of things and recognizing that there's a competitive issue. It sounds farfetched, it sounds like the plot of a movie, but believe it or not, it is real, some of it is actually happening, and it's something that we're seeing over and over again.

What this speaks to, though, is an opportunity for young emerging professionals. I know I shouldn't say young, because emerging professionals can be of any age. What I want to speak to is this notion of authentication and verification that is going to be a huge industry as it relates to generative AI and AI in general. A huge thing. The funny thing is you don't have to be what we traditionally think of as a digital native, you don't have to be a technocrat. You don't have to be someone who is a total technologist to do that. You just have to be somebody who is ethical and can develop the skills. That's it. It's about creating that generation of generative natives that we're talking about, and that is really the next phase.

Demetrius Norman:

Wow. No, it's all good stuff. With that, this is a conversation I believe we could go on and on with. I just want to take a pause for a second before we ask the last question and take care of a couple of housekeeping items. First, for those who are listening to this podcast and seeking professional development credit, this program is valid for 0.5 PDCs for the SHRM-CP or the SHRM-SCP. The code to redeem your PDCs is 25USNJY. Please note that this code will expire April 9th, 2025. Again, that code is 25USNJY.

Now, as we return back to the podcast, and I know Alex, you touched on this, so the one final question that we had as we wrap this episode up is, for the emerging professional that is considering HR or beginning their career in HR, what general advice would you give to that individual in terms of what they should be aware of as it relates to AI?

Dr. Alex Alonso:

So two things. The two things that I would encourage people to do, it's sort of funny, are, first and foremost, develop aptitude and comfort with generative AI. Go test it out. Go develop it. Turn off Google and all internet browsers on your phone. Don't use them for a month. See what kind of impact it has on you. Replace them with ChatGPT or with another generative AI like Llama or Copilot, take your pick. Do those things, learn how to search that way, and you'll already be way ahead. Many folks are already doing that. The whole notion is that you're going to get to the point where you're doing search with a generative AI tool, not using your traditional approach of asking a question and seeing what Google comes up with.

The other thing I'd encourage people to really focus on is identify what it is that you can do to develop your prompting skills. Everybody talks about prompting. I get it, you're interested in HR, you want to understand prompting, but you also want to understand what is the best way to develop prompting skills, because you're going to be asked to do that for a lot of your organizations when you go into HR.

And then the third thing that I always remind people is to be comfortable developing job rotation skills, especially if you're in HR or looking to go into a career in HR. Be comfortable developing that, and practice doing it by understanding how ChatGPT or any generative AI makes an impact in doing that work, but also in making that work possible for others. So think about it this way. Go in and start creating guides for how you would take this role, how you would use this role, and, more importantly, what it is that you should be looking for when trying to hire people for this role. Think about how AI can make you better by putting yourself in the shoes of the hiring manager, by putting yourself in the shoes of the person that has to do the job. And all of a sudden you're going to be much more successful at creating cultural alignment, technical alignment, technical proficiency, and really understanding the nature of what work is.

Aly Sharp:

I do want to flag... Sorry, just one thing. I do also love it, but just as a warning to those going into HR, if you're using an open source platform, please don't upload your company's database in there because that will be everyone's information.

Dr. Alex Alonso:

Data privacy will go away. And just for instance, if you were to use Google's platform today... What is it, Gemini? If you were to use Gemini, what ends up happening is it de-identifies who uploaded the things, but it will put your things out there over and over again. So it's never going to be protected. If you do it with ChatGPT, there's no assurance, and it's hard to know what's in that large language model. If you do it with something like Copilot, there is an enterprise version that allows you to protect those things. So if your organization is using Copilot, that's great. That's probably the industry leader in terms of data privacy.

What I will recommend, and Aly, to your point, one of the things that you should hear about in the future when it comes to AI across your organizations is, what are they doing to create limited language models? What are they doing to create those smaller use case sets and those smaller repositories? Because those are going to become much more valuable and they're going to become really interesting. There are a lot of companies out there doing this; I can think of the Marshall Goldsmith company, for example, and they're creating what they call Marshall Bot, which is their limited language model for coaches and for executive coaches. So I think it's a really neat thing and there's good guidance in that.

Demetrius Norman:

Well, listen, thank you, Alex, for, one, contributing to this episode and for all of the information that you've provided. It has been a pleasure to have this conversation with you. And one of the things that I failed to mention is that Alex is also the author of several books that are available on our website. I want to say one of them is The Price of Pettiness. There are a couple of other ones that I have; I can't remember the titles off the top of my head.

Dr. Alex Alonso:

[inaudible 00:33:50].

Demetrius Norman:

Yes, yes. So make sure you go to the SHRM store to check those out. And again, Alex, thank you so much for taking the time out of your schedule to chat with us today.

Dr. Alex Alonso:

Thank you, Demetrius. Thank you, Aly. It's really been a pleasure to be here, and I really appreciate that this is the first ever time I've been on a SHRM podcast. What a hit. Thank you.

Aly Sharp:

That's so awesome. With that, we're going to bring this episode of Career Compass to a close. Thank you all for joining us, and we hope you stay with us throughout the rest of the season as we discuss more topics like this one.

Demetrius Norman:

And for more exclusive content, resources, and tools to help you succeed in your career, consider joining SHRM as a student member. You can visit us at shrm.org/students to learn more.

Aly Sharp:

And lastly, if you're looking for more work and career related podcasts, you can check out All Things Work and Honest HR at shrm.org/podcast. Thank you again for listening, and we'll catch you on the next episode of Career Compass.