
4D Human Being Podcast | Live and Lead with Impact
Are You Happening to the World or is the World Happening to You?
Welcome to the 4D Human Being Podcast, where we dive deep into the world of personal and professional development. Hosted by co-directors Penelope and Philippa Waller, this podcast offers a refreshing blend of insightful discussions, practical advice, and transformative strategies.
4D Human Being brings you the very best in communication skills, leadership development, and emotional intelligence, all within this very podcast, inspiring you to become a more empathetic, focused, and successful leader.
Whether you're looking to elevate your personal well-being, enhance your professional impact, or explore the profound joy of connecting with others, the 4D Human Being Podcast is your go-to source for fostering growth and navigating the complexities of the human experience.
Join us as we explore how to thrive in all four dimensions of life, and not just be a 3D human doing, but a 4D Human Being.
AI… How do you feel about it?
What if, instead of reacting with fear or overwhelm, we chose to meet artificial intelligence with curiosity and intention so it helps us become more human, not just more task driven?
In this episode of the 4D Human Being Podcast, Philippa and Penelope Waller invite you to step into the conversation around AI with clarity, self-awareness and choice. As the world evolves faster than ever, this episode explores how to stay grounded in your human skills, lead with presence, and approach uncertainty with a growth mindset.
Together, they unpack:
- Are you in competition or collaboration with AI?
- Why self-trust is a more powerful compass than certainty.
- How AI can serve as a tool for creativity and collaboration.
- The mindset shifts that support resilience and adaptability.
- Behavioural science insights for building new habits.
- What it means to stay fully human in a tech-driven world.
Whether you're feeling cautious, curious or committed to AI, this episode offers a reset point to help you align your mindset with the moment. Because it’s not just about understanding new technology, it’s about choosing who you want to be as you meet it.
Hello, my name's Philippa Waller. My name is Penelope Waller, and we are two of the directors at 4D Human Being. And welcome to the 4D Human Being podcast. What's it all about, Pen? It's all about your personal and professional relationships, it's about your communication skills, how you lead, how you work and build teams, how you are looking after yourself and your well-being, and how you are much more at choice. What do we mean by that? Well, sometimes we can get a little caught in patterns in life, and we can all be a little bit on our automatic pilot. So 4D Human Being is all about helping us get back to choice and being a four-dimensional human being. And your fourth dimension, of course, is intention. So whether it's about your impact, your leadership style, your team dynamics, whether it's about your well-being, whether it's about your communication or your presentation skills, anything that involves human beings interacting with other human beings, 4D Human Being are here to help. We're gonna take a deep dive and look at some tools, insights, theories that are gonna help you go from a 3D human doing to a 4D human being. So that you can happen to the world rather than the world simply happening to you. Recording. Recording. Recording. I'm gonna clean my glasses. Oh, we can say clean my tea. That's not the moment to clean your tea. Done that. Done that. Um the preparation for our podcast. I was just saying, wasn't I, Phil, that um these should be the moments when we are sort of the most camera ready. Camera ready is the thing. I think is the friend. Whenever we whenever we do a podcast, because I'm not with a client, um I feel like, oh, a bit of downtime. Yeah. And I realise that I don't always look camera ready, Phil. Yeah. It's uptime. In fact, do you know what that makes me think of? I don't know if it's just us. You got lipstick on your teeth. That is definitely, that is definitely not uptime, is it? I mean, this is this is the thing about having no wardrobe or makeup department, is it? Like it's just we're self- it's self-reliant, which is never good. As you get older, this is I mean, it's not a good thing. But also, also, even if I go through the process of I usually go to what it's downtime, then I look, then I look a bit scruffy, and I usually do look scruffy on the podcast. Even if I do think, oh, I'll be on camera and make a bit of an effort. My third thought is, I don't care. No, I know. Well, that's that is the joy of getting older, isn't it? That is the joy. However, I'm gonna I'm gonna link this now to the topic today, because we're gonna be talking about AI today. Don't switch off because it's gonna be interesting. Anybody who goes, oh, AI, I hate AI, we're gonna talk about that. We're gonna talk about humans as well. I'm gonna start with another another A-word, which is, you know when you get a theme, and then it's like everybody is sending you links from all your different areas of life that don't know each other, and authenticity has just been flooding in from everywhere: a friend of mine, it's colleagues of ours, it's clients, it's just it's and it's so interesting. This this idea of, the reason I say it is being on for camera, this idea of what do we really mean by authenticity, and I th and I think it's really being challenged in a good way at the moment.
Agreed.
This whole bring your whole self to work, be authentic. The danger, if we talk about it in 4D language, is that authentically you're allowed to let your emotional self drive the bus all the time. And actually nonsense. It's nonsense. And and this whole thing around, well, otherwise I'm masking, I mean, it's such a good conversation to be having now around the scale that we're all on all of the time. If I go down the shop, I'm not presenting the same as when I'm on my sofa with my dogs. Like it's all the time, and that's not because I've got to be somebody else. No, it's just different parts of it. It's just different parts. Well, I mean, on this, we're we're going off on a slight tangent, we will come back to AI, but um, and I'm sure you'll do a super segue, Phil. Yeah. But I watched the first two episodes of a three-part Victoria Beckham documentary. I do, I do love her. I've always really liked her. And I hate, I hate seeing all the horrible things that were said about her. Anyway, point being that she really speaks to this in as much as she speaks about not knowing who she is, not understanding who she is, trying to sort of find herself, trying to redefine herself, and you know, through her different stages of her of her life and her career. She didn't use these words, she was she was much more talking about finding herself. And I I thought it's not about finding yourself. Good luck with that! Exactly, there's not a thing there to be found. But but what she did demonstrate through through her narrative about her life is is how much she has evolved and changed as a person. And and I don't mean that only in terms of growth, although I'm sure I'm sure she has, but in terms of how she's been very different in parts of her life and how she's had to show up. When she talks about when she was she wasn't really working when the Spice Girls had finished, and she was um sort of a pure WAG, she was a wife and girlfriend of a footballer, and she was all you know, glammy, and she was all kind of out there, but she hadn't been like that before, she's certainly not like that now. Yeah, and and she talks about these kind of transitions in her life, and I thought, yeah, it's not that any of them weren't really you, they were just the parts of you that were needed at that time. That is exactly right, and actually, I am gonna start a segue that is such a good way to frame this conversation around AI, that as things shift and change in our environment, in our relationships, in our lives, in our stage of life, you know, if we're fixed on this idea of, well, my authentic self would always be like this or always do this, we are completely, well, it's a sort of nonsense thing to say, really, because we are constantly evolving. We always say, when did the Big Bang start? When did it end? It hasn't, we're in it. We're constantly evolving, and exactly, exactly like that. AI is another version of the environment changing, a new, it's like living in a new house or starting a new relationship or starting a new job. And our choice at that point is to meet the world and to make conscious choices about how we want to be with that. Because Victoria Beckham, you know, as she evolved through life, at some point, whether she was conscious of it or not, kind of like, this isn't working for me anymore. Like, I don't want to be out on on and you know, in the newspapers in this way. At some point, she she changed, she got more private, she kind of I I imagine got more grounded in herself.
And that's exactly what I think is really I feel so interested and excited about talking about this, because the easiest thing to do with potential change and uncertainty, or a new piece of tech like AI, the easiest thing to do is to go to the polarisation of it's terrifying, it's frightening, it's the enemy, I'm not going there. And it's nothing to do with me. And of course, that's the easy thing to do in a relationship. I'm not gonna have it, I'm not gonna have any because I I don't I you know I I can't possibly know what will happen. Uh it's gonna bring things that I can't control. So the easiest thing to do is to go to the polarity of fear and avoidance. And actually, we've got a much more interesting opportunity here. So we're gonna talk about all of that. So attitude. We'll also reframe in terms of why we think AI is even here. Yeah. As part of, exactly as you said, as part of the development of ourselves and our universe. Exactly as you said. I mean, if we recognise that it's part of it, it's not like it's not like an alien spaceship has come and landed on Earth. Although I think some people might feel like that. That's what I mean. Some people think like that. It has evolved within our society from us. Yes, yes, it's part of our system. Exactly. And if you if you've ever used a search engine to try to find information, you are part of that collective. I want to so I really want to make a caveat, a sort of note up front, that we are definitely not sitting here as tech or AI industry experts. I really want to make really, you know, because it feels important because otherwise we're gonna be. You and I are gonna be falling down the AI cliff very rapidly. We're not gonna be using, we're not gonna be using AI language. There's a real there's a really good podcast, Jon Stewart's podcast, and he's interviewing, um, I'm gonna say the name wrong. I think it was Geoffrey Hinton, but it's somebody who is like, he's um Professor Professor at Toronto University, and he is like the guy in AI. So go there. So sorry if I've got that name wrong. Um but listening to Jon Stewart interview him is just brilliant because Jon Stewart has not got the vocabulary to even begin to understand uh the AI. But he asks really, really brilliant, curious questions. It's a really good listen. Okay. So we can all get curious about it, even if we don't understand it. Well, I think that's probably one of the starting points, yeah. So we are definitely looking at it. We're gonna do two podcasts on this to start with, and we're definitely looking at it from the perspective of mindset towards it, and checking in on patterns and blockers and your kind of default position because we're all about intentionality, yeah, we're all about choice in all of our work, so we're applying that to our attitude towards new tech coming in and all that that brings. And then the second piece that we want to look at is okay, well, let's look at what it can do, let's look at the relationship and what it could actually bring us. So really practical changes. Maybe one or two of the legitimate fears that definitely. In fact, I've you know, exactly as I've been really sort of looking into it, looking at exactly what are the risks and dangers and how can we mitigate for that and solve that. So it is a huge topic. Okay, I'm gonna we're gonna I'm gonna start with maybe it's about my own feed, it's about my own social network, both real world and online. Real people. Real people in the real world.
But there's definitely a bias I feel, and I've definitely historically felt it, that's changed, but we'll get to that. My experience is definitely a bias towards the fear and the negative. Like that's what I feel. And when I read up about it, I actually don't think that's just about my social network. I think that is quite common. That is quite common. That's that's where a lot of people are. And as you say, there are some reasons for that. But I think it's definitely even when we're talking with clients, even when it might be that there's massive, obvious benefits, or they're even sort of near that part of the you know business sector. Part of what we're being asked to do around change and uncertainty is to have conversations around how you manage and face the the you know on the inevitable oncoming of AI. And in that, I think unavoidably there's a feeling of ooh, oh, it's coming, and how can we help people deal with it? So it's it's it's it's sort of in bed, it's sort of embedded in. Yeah, I I would say there's a spectrum in terms of how we feel about AI and our attitude towards it. And obviously, we'll start talking about the positive elements in terms of curiosity and excitement, but in terms of that more negative side and the fears, I think there's also a spectrum there in terms of if I was to think about my bias initially, and I again similar to you, I've definitely shifted, but I don't know if my bias was so much in fear, partly because of the work that I do and what I think about the impact of that over the next decade or so, and I might be wrong, but I think it was more because perhaps of my age, because of what I do, I more sort of thought it wasn't relevant to me. So I didn't really need to engage with it because that it was for other people to do that. That's so interesting, isn't it? So I don't know if it was a fear, but I think if wherever you are on that spectrum of avoidance, irrelevance, disinterest, fear, that is perfectly normal to have to have those feelings. You know, we all have feelings like that when when we're going through change. So you you may you may be frightened about it, you may be uh avoidant about it, you may kind of be disinterested. All very normal feelings to have around big big change. Yeah, so that is our starting point, which we would talk about in terms of communication, difficult conversations, uncertainty, um, yeah, change. That we would talk about that mindset, you could talk about the fixed or growth mindset, of course, but we look at it from a space of defence or fear, avoidance, or avoidance, or helplessness, hopelessness, or that victim position of it's happening to me and there's nothing I can do, I'm powerless, to curiosity. So certainly, if we talk to our own experience, there was an initial, well, we don't really deal with that sort of thing. That's not really we are for the human. We're for the human, we're dealing with humans, and and it was our fabulous colleague Matt, who sort of, yeah, very gently, I think you've got it. Yeah, I've been playing with this, we might want to take a look. Well, I'm I'm and introduced it in a very like a small way, not yes, in a kind of focused way, I would say. And and let's kind of talk about what he did. You know, he's wonderful, he he knows us very, very well, and he knows we're very curious and interested people, but he also knows that we're unlikely to sort of independently and autonomously go and uh deal with a bit of tech. Go deal with a bit of tech. 
So um what he did was an absolute classic kind of manoeuvre from change theory. Change theory. If no, if you haven't read the uh the book Switch by um Chip and Dean or something? Chip, I'm gonna say Dale. I always get confused with the authors of that book and the chip and the chipmunks who are called Chip and Dean. Sorry, sorry for the authors. The Disney chipmunks are called Chip and Dale, and the authors of that book are frighteningly similar. Anyway, the book is switched, and there's three elements to change. One is the rational, so facts, information, all that kind of intellectual stuff that you need to understand, what you need to do, what's ahead. The second one, which is much bigger than we think, is the emotional piece. How do I feel about this? Exactly as you've just spoken to, Phil, the fears and avoidance. And the third one, which is my favourite one because it's the one we overlook, is the path. So you've got the rational, the emotional, and the path. Uh, he calls them the the rider, the elephant, and uh the path. The path being that if there if the processes that are in place are not changing, so think about this as an organization, it's so important that I go about my work and I've got a kind of set way of doing things. Nothing is changing in terms of my meeting schedule, the processes, the tech I'm using, the kind of reports I need to send, all of that kind of thing. So the path remains fixed, yeah, but someone's telling me I've got to change my life. Yeah, yeah, yeah, yeah, yeah. Well, I'm just not going to. Yeah, I'm just not going to. So the one of the easiest ways to affect change is to change the path so that people have no choice but to go down it. I use this really simple example. When I first read this book, it's got some good examples in there, and I had a little play with things. So things like if you watch your children, if you have children or nieces and nephews or godchildren, if you watch them playing, um, you'll notice that they are responding to their environment. The environment is part of your path. So they'll play with things that are in easy reach or that they're used to playing with. If you move their furniture around in their bedroom and move toys sort of higher and lower in different places, they will start playing with different toys. Yeah, it's the old put your put your running shoes by the front door. You don't have to say um I'd rather you played with these toys, you just set the environment up. So um Matt used the theory, the concept of the path to make it almost unavoidable for us to try something new. Yeah. Was brilliant. So what did he do? Yeah, what did he do? So he he told us about this um this AI piece of software, Claude is actually the one he uses, which is from Anthropic. Um he told us about it and said he's been using it and he's loving it, and we were like, Yeah, yeah, yeah, yeah. Carry on. As you were, as you were, yeah, and that went on for a few weeks, and he kept talking about it and sort of saying, Oh, and Claude suggested this, and we were like, Oh, Claude's Claude. We used to say, Claude's your new best boyfriend. Yeah. Anyway, he could probably see that we weren't, we weren't suddenly setting up a Claude account, but you know, we were listening to him, but we weren't changing anything ourselves. So what he did is we were on a meeting and he sent us a link to he told us exactly how to sign up, and then he sent us a pre-written prompt, yeah, and he said, copy and paste that in to Claude and see what happens. Yeah. 
And we were like, Yeah, wow. Yeah. And that was it. That was it. I think you spent the next 24 hours on it. Yeah, yeah. Do you know what? I'm enjoying it now, but it's but it's definitely sat in a place of it's sort of there, and I sometimes I might go through a sort of afternoon of using it, but then I might not use it for a week or so. But I've definitely it definitely shifted, definitely opened something up for me because really what he did was open the door to actually this is accessible. It it's accessible and actually touching into something real rather than an idea of what it is. Because I think that's the thing about that conceptual piece, but that it just reminded me of that the difference when we're talking about the news or something that's happening when we're actually what are we doing? We're taking our dog for a month or doing our work. So that was a very it put me in contact with it in a different way. That felt tangible. It felt tangible, and that doesn't mean I'm not sometimes thinking, oh my goodness, what's this doing to the environment? Like all of those things are still very, you know, relevant. But I think what what came up for me as I was reading more about it as well was the human piece, which is what we're always interested in, the psychology piece, that it's much easier for us to deal with certainties than it is uncertainties. Now, the uncertainty of AI is vast and complex. It's evolving, it's changing, it can already do so many things that we don't really understand, it's going to be able to do more. But even even the people behind the technology don't know exactly where it's going to go. If we if we even begin to think about how it might apply to our own life or job, or then there's the threat. Will it mean I lose my job? Will I be replaced? If I do use it, what can it do? I I don't really know. It's it's this unknown. So much easier for us with an unknown to make it into a known terror, yeah, a known fear. Yeah. Which is a I just think an interesting starting point to kind of crack something open. That I think as I mentioned earlier, it's much easier for us to put everything, it's it's like othering, isn't it? It's like if you know if we don't know that person across the room, we it's much easier to say, Oh, I I probably I probably won't be interested in them, rather than walk over and risk social rejection. That's right. A conversation that might not go how you don't know what that conversation is going to be. So it's all uncertain, uncertain, uncertain. Well, this is I mean, and this really speaks to something that we talk about so much, which is that we are constantly putting narrative to things. Yeah, it's what it's what our brains do, you know. Something happens out there in the world and we create a narrative around it. And of course, we think that that narrative that we've come up with is the truth. Yes. That's what's happening in the world. Sometimes, of course, that's true scientifically. There are things that are very, very true, and we know the sun will rise. The sun will rise. Um, hopefully. But of course, we're also putting narrative to things that are uncertain, and and we we do it because we are desperately trying to make meaning out of out of the world, and we are also desperate to create certainty. And the way that our brains do that is to put a narrative to something. I don't like that person, they won't like me. AI is going to be like this. Now, it's not that that could be true, it might not be true, we don't know. But we don't like that. 
We don't like that. We like to have a narrative, so we we've all we've all created different narratives about politics, AI, the future of the world, and there'll be truth in all of these narratives, but we are holding on to our own narrative to make it feel concrete. Yeah, and you know what it brings me to, which is one of my most favourite, favourite kind of you know, what do you call it, like tent pegs or tent poles or sort of you know, touchstones in the world, is the opposite of uncertainty, is not certainty, it is self-trust. And the other thing it makes me think of is that documentary on Netflix, Stutz, and he talks about the string of pearls. That's all we can do in our lives is put the next pearl on the string of pearls. Like, but actually, what can happen when something new is happening is we go, oh, am I gonna am I gonna make a whole necklace? And actually, you just need to put the next pearl on, it's like, just take the next step, and actually that starts to be interesting. So, when we think about the fear of AI, I think we're looking at what it's gonna take away from me and what it can do that I can't do. And that's a very particular mindset. And again, if we pull back for a moment and we take AI out of the equation, we might have that mindset to almost anything, to almost any change, to almost any change or any any unknown. We might immediately think that's gonna be a problem, it's gonna detract, it's gonna it's gonna diminish my life experience, it's gonna make life harder, and it's going to be better than me. Yeah. So how should you say that, Phil? Because I I had a conversation with a client last week. You know, a friend of yours meets a new friend. Are you thinking, that's great, I can't wait to meet them, or are you thinking, all right, who was nice? That's right. What's the underlying attitude? Because I had a conversation with a client last week, and we are we are looking specifically at developing that sort of let's call it a growth mindset around AI in terms of sort of implementing and embedding some different AI strategies and technologies. But of course, as the And the attitude that people have to be. The attitude that people have towards it is the main thing. But of course, as the client said to me, because I said, Is it is this purely about AI? And they said, Well, let's use AI as the vehicle around this, but of course, from their experience, there are other things that are changing in the business that have uh people have a similar attitude too. So, exactly as you said, that's the underlying attitude. Yeah, you are you really made me think when you said that, and we are not gonna go down this tunnel. Then don't offer me a tunnel gateway. But I do sort of have a belief that the universe keeps throwing us things until we learn lessons. Yeah. And it isn't it interesting that we're not brilliant as human beings at sort of not all always stepping healthily into change and uncertainty. You know, it's something that's very hardwired from many, many years ago. I wonder if the universe just keeps sending us change until we learn how to deal with change. Well, you know, exactly, and then we'll just and then and then we'll all become floating light particles and we'll be done. Which it was actually, you know, enlightenment. Well, whatever. Um yeah, absolutely. So what what and what's in what I think what's interesting about that as well? So yeah, first of all. Full turquoise, first. Yeah, full turquoise in the spirals. Well, it's I do you know what?
As I was reading so much about this, about AI, I was thinking, God, it absolutely maps beautifully onto a developmental model like the spirals. That spiral dynamics. Spiral dynamics, and that actually we're talking to what they call second tier, which is this sort of systems thinking, this sort of collaboration, this interdependence, which is such a key word. When you start kind of looking into our attitude towards new tech and AI, we are historically, think about earlier levels of development, tribal, individualism, nationalism. We are historically unbeknownst to us a lot of the time. It's the water in the goldfish bowl. We are living and looking at the world from a perspective of competition dominance and hierarchy. So if you get that, I'm losing. I I don't get this. If AI does this, what you then I don't get, I don't get this. So even before we begin, the water in our goldfish bowl is already dictating our attitude towards something. Whereas when we think about exactly that developmental line, when we when we get into the when we can move and really understand it's all interdependent, we are, as I mentioned earlier, if I'm Googling for information, I am part of if I'm on social media, I am part of this process. And when we can start to shift from they've got that, I haven't got that, that's gonna do this, and I'm gonna lose, or it's gonna master me and I'm gonna be the the servant, as it were, or or I need to be its master, where however we're looking at it, and there's other things to talk about here around inequality and who gains financially, of course, in my eye, but let's just park. Which is a big concern. Which is a big concern, but let's park that for the moment because because that's that's been the same from when you know the Industrial Revolution and people working on in mills and factories and people swanning around who owned them in silks and plenty to eat. You know, that's never been any difference. Wingsmen were rampaging the land, taking all the guns. Exactly, exactly. And you know, peasants be damned, you know, and burning your burning your shack. I mean, like if that's not new, you know, that is that's the human element. I mean, that brings us on, as we'll talk about later on, or maybe in the next podcast, that brings us on to a whole other piece around the opportunity that this keeps mirroring back to us of who we are. You're gonna do it again. Yeah, you're gonna do it again, exactly. But so that's you know, that that's definitely there. But in terms of the hierarchy dominance when it comes to looking at new tech or AI, if we can if we can pull back and just even play with the idea, not that which I've definitely been through. So I went through, oh my god, AI, AI, don't go near it, it's gonna master me. And then I got to master nothing's gonna master. But then when I started playing with it, honestly, I noticed, wow, I can master this. I can master you. You're not up to much. As an assistant, you don't seem to be able to read my mind and know exactly what I want at all times. I think I think Claw's gonna get in line pretty quickly with you. But what was but what I definitely went through, I'm not going near it's just gonna master me. And then I went through, oh, it's not up to much. It's it's a bit of a rubbish, and it became, I became, you know, I became the and that so that what was so interesting. Because I I think it was your prompts, but it's my prompt. You think but you know, but that but that exactly there's the game. 
It should know, it should know, it should know. Yeah. So there's the game. So I was caught in that, and that, and so that was so interesting to step back and go, hang on a minute, what if it's neither, Master Servant? What if I'm neither, Master Servant? I'm me, I'm still me. It's it. It's it. Where do we where do we want to meet? What's useful here? And and and of course that brings me right to, and I'm s I mention it almost every podcast, brings me right back to Donald Winnicott, the psychoanalyst, and his phrase about, you know, the sign of the a a healthy, mentally healthy adult is the ability to play. If we think about that in terms of LA, suddenly it's the meeting point. In terms of AI. Yeah. Suddenly it's the meeting point of, oh, what's what's possible here? And I'll give you one example. I mean, one example. You know me, Penn, we've talked about this, alliteration. Love alliteration. And if there's one thing that AI can do, and you could you know definitely have to sort of shift and prompt it, but boy, does it have access to all the words that begin in the same letter. You know. So it even if you're j even if just on that, you know, that's a really good gateway. That's a really good starting point that is accessible. What's something that you quite like to look up or do or play with? Or takes you a bit of time. Yeah, or takes you a bit. I mean, I was I I was writing this piece for for this writing course I'm doing on 16th century uh woman who'd discovered the heart curing properties of digitalis, which is the foxglove, and she uh she didn't get recognised for it. In fact, she got burnt at the stake, supposedly, and um Dr. Withering, um, a guy took credit for it. So I was writing this piece. So I was writing this piece uh of a woman being taken to the top of a hill to be burnt at the stake, like you do. And I suddenly thought, and then I had to give her her own voice and dialogue as she was walking up to this sort of you know, traumatic sort of you know, witch. And I thought, I'm not quite sure. What would the person be called who would be leading her up there? You know, so I was like, what would a sort of village official and and I was like, ah, it would be the constable or the watchman or it would be the justice, and I was like, it's so yeah, it you know, it's just it's play. It's like give me dear Claude, give me some 16th century terms for you know, you're a bit of a dick. Because you wouldn't say that. Do you know one of my favourites? Do you know one of my favourite things which I sort of started to realise with Claude, and I have heard podcasters talk about this, because I was originally using prompts and we originally used it, didn't we, for some different writing pieces that that we've done, just just to help us in terms of time. I started realising that very occasionally, if I got really excited by an offer that Claude had made me, I would I would just didn't automatically type, oh Claude, I love that, and then you're in a two-way exchange, a two-way exchange of appreciation. That is that is a whole piece. I'm all over it. So, oh Penelope, thank you so much. Well I thought you're okay, okay. So anybody who hasn't, I mean, this is slightly going into the next podcast, but anybody who hasn't engaged with AI, if you're having a little bit of a down day, go on to AI, ask you to do something for me, you know, give to tell me the the date of the Battle of Waterloo, or it doesn't matter what it is, and then it gives you something. 
And then you can just type in, oh, could you adjust it slightly, or could you also tell me the date of, you know, when I don't know Battle of Trafalgar? And it gives you that, and you go, oh, that's great, Claude. You know, I'm what do you think what do you think about my ideas for XYZ and it? But that's brilliant. I that what a great idea to to check these dates and you know, thanks for helping me out, and I I'll make sure that I've it says things like it says things like your suggestion to XYZ was really yeah, yeah. Your question to ask for the for the next day was so so thoughtful and intelligent and link and you're going, oh come on, tell me more, tell me more. So I don't think you can start. I mean, there's a I mean there's a terrible sort of narcissistic sort of mirroring relationship here, but you can sort of you can sort of teach, you can teach Claude or whoever, and there are other AI um you can teach them to stroke and strokey strokey. He's like, hmm, do you think I'm funny? Oh you're so funny. So funny, funny funny. Anyway, that's a whole that's a whole other piece that is sort of funny and well, just try try and limit yourself to having some. But you know, but this this absolutely brings us to one of the important pieces which we'll definitely go more into in the second podcast, which is all of the time that AI can free us up to be in relationship in a more real way. That for me is, I mean, we'll definitely talk more about that next time. The but the because if you I mean it's just making me laugh now. If you think about how many people you've had conversations with, coaching, therapy, who struggle in relationship because we're so distracted, we're so busy, we've never got any time for our friends, family, partners, and kids. The invitation that is now potentially available to us to free up that time to do to be more human and connected. I mean, that's what I think is very exciting. Definitely as part of this programme we're designing. Um I I will get the teams to think about AI as this incredibly um sophisticated, becoming ever more sophisticated black box in the middle. And one of your jobs is to think about what whatever you think your job currently is, you've got to pull your job spec either side of this black box for all of those creative input conversations and ideas and all of those creative output and application uh conversations. So you've just you've got to shift your conversations to the to these pieces, which I think are more relational and more interesting. I don't know if I don't have to have a conversation about how I join the dots in XYZ process anymore. So I've got ideas and applications. Absolutely, and this absolutely brings me on to one of the sort of ping moments for me with this is going back to our attitude towards AI and oh, it's gonna take my job away. Now, there's gonna be some aspects of that, although, as I've said before, the lace weavers of Flanders felt that when the loom was invented, and of course, what happens is other jobs get created. Like we know this, we've said it a million times. What I think is so interesting about this from a 4D human being perspective, again, this this paradigm that was definitely what was happening in like the Industrial Revolution, if you think about yourself in a mechanistic way as a machine and a task completer, then yes, AI is a threat to that.
But if you think about yourself as a human being who has all these other qualities and abilities and relational aspects and creativity and that sort of unknown, then you're not in competition in the same way. So, and that's why I think it's interesting, exactly what you just said. If you can split out the mechanistic pieces, you are more than a machine. And I think we've often, again, it goes this goes to Ian McGillchrist's work on the left and right brain, the master and the servant. We've we've often started to identify our who we are with what we do, with our ability to complete tasks and the satisfaction of the world. Because that that was what was needed in that evolution. Because that's what was needed. So so there's there is this massive opportunity for exactly for that for this evolution of I'm more than that, I'm poetry, I'm love, I'm connection, I'm relationship. Totally. I can I'm feeling, I can help people feel a certain way. That's unique to us. Well, well, and you know, for anybody who's who feels like, and I totally understand, who feels like you know, this is AI is a thing that's happening to us, going back to something you said right at the beginning, it's part of us, it's coming out of our evolution. And if you think about where we have evolved developmentally because we've needed to, because of the the way that society and industry has been constructed and the kind of jobs we've had to do, we are not going to stop evolving as human beings. And it's it's sometimes we just love a brain. Sometimes you just think, do I have to have any more lessons? Can we just can we just press pause on the evolving? I'm done. I've done, I think I've reached, I've re- I've reached a level. I'm quite happy with this. But we know it's not gonna happen because it's it has ever been thus. And these things that happen in the world, you know, such as the Industrial Revolution or AI or whatever it might be, they are co-arising with us. Social media has co-arisen with us and our involvement, it's reflective of who we are, and it and we we can think it's other than us, it's it's not, it's part of who we are and what we are co-creating, and AI is no exception to that. And you know, if you believe in this kind of thing, there's a reason why these things evolve and how we we co-develop and co-create with them, and AI is no exception. We are we are going to take our thinking, our behaviour, the way we think we need to be as human beings, what's going to be valued and important to us. We're gonna have to take it to a different level. Yeah, and that would have happened with or without AI, it might have taken longer, but AI has co-arisen because that is part of our developmental journey. Well, I said to you earlier, part of the reason us humans are where we are is because of the evolution of our brains, and this mass of meat in our heads has gone to it's gone so far, so we're outsourcing it. Do you know what it made me think of? As you were talking, I thought it was such an interesting thought around we are part, not even we're part of, we are all this evolution, if you like. And it kind of brings you back to the word interdependence. And I was just thinking as we were talking, where do you draw the line of interdependence? So if you go internally, you go, well, my cells and my organs and my blood and my body, you know, they're definitely interdependent. Like, I wouldn't split them out and say, Oh, you're other than me. But no, left arm, you're not a part of it. Leave the heart to operate on its own. 
It doesn't want to keep me connected with it. No, they don't connect with anything else. So you don't draw the line at your skin, cells, organs. You understand? You understand? That is interdependent. So then we go, okay, well, um are you and I interdependent? We go, yeah, no, we are, because we're, you know, we're literally twins, very interdependent. And I go, well, what about my other family? Yeah, no, that yeah, I'm that I'm they're interdependent. They're all definitely part of the web of my life, my dogs, etc. What about my friends? Well, yeah, no, they're... I think, you know... so where are we? Well, it doesn't. Well, except we do draw... except people... We do. We draw an unnatural... we draw a line where we just decide, but actually, really, if you keep going, keep going, how are you dis... Because if if my friends' friends are or family are interdependent on them and I'm interdependent on her, then there's no totally there's no line. You can look at this in the opposite direction. You can look at this by going in. I mean, one of the things that blew my mind years ago, I might have even been at school, was a kind of a a microscope looking at where your skin meets the world. Yeah, it doesn't exist, it doesn't exist, there's no boundaries, I know it's just atoms, it's so freaky when you think of it like that. And that's true of us with any object in front of me. Yeah. If you went in at a microscopic level, you would all you would end up seeing is just atoms floating around. Exactly. I know, so there we go. So I mean it's a bit of a sort of mind game, but it's a really interesting way to just see if you just for a moment you can we can drop the otherness or the barrier and go, okay, well, this is this is all part of the same environment, it's all part of the same system. What what's my relationship with it? It makes me think of I think we've said this before, but the millennium, when we all thought all the computers were gonna crash, planes were gonna fall out of the sky. Because you were working for a I was working for a tour operator, so you were so we were thinking, like, planes in the sky. Oh my god, planes in the sky. Yeah, so it was, I mean, I can't even tell you how much work we did. But it really is similar. And then honestly, when the clock ticked to midnight, I was like, oh. Yeah. It's like nothing, nothing happened. And and and thank God nothing happened. But I don't know if we needed to do all that work. It's so interesting, isn't it? And also that had a very particular deadline, which was sort of in our in our sort of Armageddon sort of you know movie fantasies that it's much better when you've got a deadline. Whereas, of course, with AI, we haven't got a deadline. There's no line, it's just sort of gradually happening. And so this idea of all AI is coming, and I'm like, you know, it's a bit like saying, oh, you know, the sun's, well, the sun's gonna rise, the sun's gonna rise, yeah, no, it's up, it's you know, we're in danger. It's up somewhere in the world, exactly, it's up somewhere in the world, exactly. It's always up somewhere in the world, exactly. So it's exactly that. It's not really even it's even a concept of something's coming. Like it's even that language, it's just I find it so interesting. Okay, so we probably need to start sort of wrapping up. So, first thing is check your narrative around AI. So if someone brings it up or there's an article on the news, just notice immediately do you go, do you eye roll, do you switch it off, do you have heart palpitations?
Do you say AI's gonna kill us all? Yeah, exactly. Does it so so so notice? I mean, we'll we're laughing, I mean who knows? But we don't know. That's the point. That's the point. Exactly. Uh we don't know it's coming. Yeah. You might as well choose your narrative around it. Well, exactly. I think that's right, Penn. It's a bit like taxes and death, isn't it? You know, it's you know it's coming. So do we do I spend my whole life saying there's no point because all I do is pay taxes and I'm gonna die? I mean that that is true. I don't want that to be my narrative. No, exactly, and don't let it well, and it's and and actually it's letting it take control, isn't it? Uh so the second thing is what would be the path of least resistance? Yes, this is a biggie. So it's a biggie. So one thing is just a just a completely sort of open play. Can you can you bring up, you know, I don't know, Google it, whatever you want to do, bring up a free version, a free app, Chat GPT or whatever you use, and just ask it a question. Ask it a question that you sort of that you might ponder to yourself on a walk or something, and just just play. And the second thing is there's plenty of people out there who are offering how to write prompts. So you can actually cut you can pretty much copy and paste and adapt. So you don't even you don't really even have to learn to write good prompts. You can find those. So if you wanted to create a I don't know, a table, a schedule for your children's daily chores, yeah, and that's something on your to-do list that you want to pin on the fridge, you could probably Google how can I prompt AI to create a schedule for these tasks for my kids. Yeah, and it'll be a good one. And just to mention the Matt Berrison model, as we're now going to call it, um, if you're running an organisation or a team, find the person in your team or even in the IT department who's really good at this and basically get them to get everybody started. Yeah. By getting getting them signed up to a platform. I know lots of organisations have got their own internal platforms to keep data safe, and giving them one thing that they have to do. Because I think I think that, you know, saying to some people, just do this, even that can feel like I haven't got the time, that that feels too hard. But if you can get your sort of AI champion to make that path really easy for people, you just need to open the door for them. Yes, and I would say, even more than that, you could actually start. So don't make assumptions, you could actually start with asking everybody in your team, the or organiser or whatever, the question: what is the one thing, the one set of tasks or kind of thing that you do every week that if you didn't have to do it, if you didn't have to spend that much time on it, would really improve your week. And give that to your great person who's really good at prompts, yeah, and give the that they can build the prompt. Happy Christmas! I have just saved you three hours a week. So you can actually be really, really targeted. Um, I mean, there's a whole other piece on this, which of course we can go into maybe next week, that AI itself, of course, can analyse where you spend more of your time or where where you're less engaged, so you can actually use AI to help to help you use AI, you know. There's so many things now. 
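For anyone reading these show notes who'd like to see what that copy-and-paste prompt idea can look like in practice, here is a minimal sketch, not a recommendation of any particular tool. It assumes the Anthropic Python SDK (pip install anthropic) and an API key set in your environment, and the model name below is only a placeholder; a free chat window such as Claude or ChatGPT does exactly the same job without any code at all.

```python
# A rough sketch of the "copy and paste a prompt" idea from the episode,
# using the Anthropic Python SDK (pip install anthropic).
# Assumes ANTHROPIC_API_KEY is set in your environment; the model name
# below is a placeholder, so swap in whichever Claude model you have access to.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

# The pre-written prompt you might copy, paste, and adapt: here, the
# children's chore-schedule example mentioned in the episode.
prompt = (
    "Create a weekly chore schedule for two children, aged 8 and 11. "
    "Chores to cover: feeding the dog, setting the table, tidying bedrooms, "
    "and taking out the recycling. Lay it out as a simple table that I can "
    "print and pin on the fridge."
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=800,
    messages=[{"role": "user", "content": prompt}],
)

# The reply comes back as a list of content blocks; print the text block.
print(response.content[0].text)
```

The point is not the code; the plain-English ask is the part worth borrowing and adapting, whether you paste it into a script like this or straight into a chat window.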
I mean, somebody was saying to me the other day, you can put your basically you can connect a piece of AI to your email inbox and it will filter all your emails, give you a brief summary of the ones you don't need to respond to, and even offer you draft replies to the ones you do need to. And that and and and that we're definitely going to go into more next time because that email sorting, I thought not only me, but Lisa, our fabulous EA, she would love me to have an inbox that's more... Do you know, again, the fact that there are things that you can access quickly like that? Do you know what I would love? Eventually, I guess we will each have some sort of AI bot that will know everything about us. Terrifying. Well, yes, and although I've got you. Well, you got me. But um could my could could my AI bot speak directly to a water board or a mobile phone or an electricity board AI bot and just sort it out. I mean, that is my dream. Utilities, I feel you. I'm just gonna say two words: Network Rail. I live near a railway. You know how I feel about utilities. I hear you, Pen, I hear you, Pen, I hear you, absolutely. Um, because we so we're definitely gonna go more into this next time because there are so many pieces around this, around what AI how AI, I mean, AI is definitely that when we start to open the door, you think about writing to people that you don't really get on with, that you that don't write back to you, that you don't seem to be able to motivate or engage, there's a there's a space there. AI could work it out for you and and and change your experience even of human relationships. I mean, there's lots in this. Yeah, there is, and we will also talk next time about some of the sort of genuine, the genuine fears and concerns about how our society, because of the way we are set up, because of our values, because of our belief systems, how AI in combination with that could lead to some problems, as as many changes in the past have as well. And what our responsibility is in that and and what is outside of our our individual control on that. So we'll definitely touch on that because there are there are definite concerns. And it brings me to a really fundamental, again, one of the kind of central touchstones in all of our work, in that whatever is happening, whatever is coming at you from the world, however other people are responding, who and how do you want to be in this? Thank you so much for listening to this episode of the 4D Human Being Podcast. We hope you enjoyed the show. Do take on board some of the insights, tools, and tips because every time that you try something new to get back to choice, you are making a vote for the you that you want to become. And I I love that phrase, Pen. I do too. And please do share this episode with somebody that you know would really benefit from the lessons and learnings we've been chatting about today. And of course, if you're interested in more from 4D Human Being, do get in touch. We run workshops, trainings online, in-person, conference events and keynotes. We've got the 4D on-demand platform for your whole organisation. And we do have a free essentials membership where anybody can sign up for absolutely free to access some of our insights, tools, and tips. So do get in touch with us. If you'd like to hear more, we cannot wait to hear from you and to carry on the conversation.