Krome Cast: Tech-IT-Out

KROME CAST TECH-IT-OUT: How to Migrate from Legacy Systems, from the hardware layer through to the application layer.

Krome Technologies Season 2

In this episode of Krome Cast: Tech-IT-Out, we review the challenges faced with legacy system migration; looking at everything from the hardware layer through to the application layer.

We discuss how to successfully update IT systems, and the legacy system problems often faced by organisations when embarking on system upgrades and migrations. These include data migration considerations, such as using extract, transform and load (ETL) tools, hardware migration plans, OS migration, virtualisation migration and application migration.

This tech panel podcast features Krome's Commercial Director, Sam Mager, along with Krome's MD, Rupert Mills, and Technical Director Ben Randall, sharing their views and experience on how to migrate data from legacy platforms and what technology migration strategy is best to achieve a seamless tech migration path.

► ABOUT KROME: Krome Technologies is a technically strong, people-centric technology consultancy, focused on delivering end-to-end infrastructure and security solutions that solve business challenges and protect critical data. We work collaboratively with clients, forming long-term business partnerships, applying knowledge, experience and the resources our clients need to solve problems, design solutions and co-create agile, efficient and scalable IT services.

► KROME WEBSITE: https://www.krome.co.uk/

► SOCIAL MEDIA
• YouTube: https://www.youtube.com/@krometechnologies
• LinkedIn: https://www.linkedin.com/company/krome-technologies-ltd
• Instagram: https://www.instagram.com/krometechnologies/
• Twitter: https://twitter.com/KromeTech
• Facebook: https://www.facebook.com/KromeTechnologies/

► CONTACT
• Telephone: 01932 232345
• Email: info@krome.co.uk

Welcome to Krome Cast, Tech-it-Out. I'm Sam Mager, Commercial Director for Krome Technologies, and on this edition of Krome Cast, we're talking about legacy system migrations, from the hardware to the application layer. And a slight change of format: we've beefed up the intellectual horsepower of this edition, and I'm joined by two people, namely Ben Randall, Technical Director, and Rupert Mills, founder of Krome and business partner. Chaps, good afternoon. Afternoon. Or good morning, I don't know how long it's taken us to set up, but we'll go with afternoon. Afternoon. Let's go with that. Perfect. So, change of format, with the idea being that today, hopefully, we're going to dig into it a bit more technically than usual, based on some client feedback as well; people want to see more of Ben Randall, so we'll give the people what they want. Of course. [Laughter] And just looking at the subject matter: we've recently done our Customer Satisfaction Survey, and feedback from that has given us the data that people are very interested in migrations, specifically from legacy platforms, be that desktop platforms through to big-end infrastructure. The how, I think, is that, potentially due to the pandemic, people have got on with doing business and not necessarily done some of the heavy lifting we'd have done in years before. Now that's caught up with people, and people are looking at things like server refreshes, migrations to the cloud; it could be new platforms, on-premise, etc. But as we know, there's a whole raft of challenges that can come with that, and, you know, I'll use my cheat notes. But if we're going to look at that, top down or bottom up, whichever way you look at it, we're starting at the hardware, because if we're doing a migration or refresh, clearly the hardware, the software and the applications are intrinsically linked. 
But we sometimes have to look at these as independent parts to make migrations and so on work. Sure. Yeah, I mean, if we were addressing it for a client, as we often do, you kind of look at the four different stages: you've got the hardware, the virtualisation layer if it's relevant, the operating system layer and the application layer. And you're right, you've got to take all of those into account when you're looking at what you're trying to do, and which of those you're trying to get away from, if not all of them. So yeah, we can start with the hardware layer, it's as good as any. And there's a lot of change in hardware going on at the moment. Obviously, you've got a lot of migration to SSD going on. So a lot of new storage arrays, if you're sticking with on-premise, are SSD based; you've got to look at the suitability for the data you're putting in them, and work out whether or not it's going to fit on SSDs, or how the various compression and data reduction algorithms within the platform will work with the data you've got. So historically you used to say, I need 100 terabytes of data, so we get 100 terabytes of storage. Now what tends to happen is, you've got 100 terabytes of data but you actually need 20 terabytes of storage, or 10 terabytes of storage. And that's been around for a little while now, but certainly, with hardware migrations, that's becoming a bigger and bigger concern: making sure you size it appropriately. And then the chipsets are moving forward so fast from Intel at the moment that you've got to look at what you get out of the new horsepower, the new chips, compared to what you got historically, and size those correctly as well. 
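The sizing arithmetic described above can be sketched as follows. This is an illustrative back-of-the-envelope calculation, not vendor sizing tooling; the reduction ratios are assumed example figures, and real ratios depend heavily on the data (already-compressed or encrypted data reduces very little).

```python
# Illustrative sketch: how a data-reduction (dedupe + compression) ratio
# changes the raw SSD capacity you actually need to buy.
# Ratios here are assumptions for demonstration, not array specifications.

def raw_capacity_needed(data_tb: float, reduction_ratio: float) -> float:
    """Return raw storage (TB) needed for data_tb of logical data,
    given an expected data-reduction ratio (e.g. 5.0 for 5:1)."""
    if reduction_ratio <= 0:
        raise ValueError("reduction ratio must be positive")
    return data_tb / reduction_ratio

# 100 TB of data at an assumed 5:1 reduction fits in 20 TB of raw flash,
# matching the "100 TB of data, 20 TB of storage" example in the discussion.
print(raw_capacity_needed(100, 5))   # 20.0

# Already-compressed or encrypted data barely reduces, so size near 1:1.
print(raw_capacity_needed(100, 1.1))
```

The point of the exercise is that sizing now starts from an assumed reduction ratio rather than raw capacity, which is why checking how your actual dataset reduces matters before committing to an all-flash array.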
There's some horrible old chips around which you can accidentally buy, or you can end up with exactly what you need in the newer chipsets and a lot of horsepower, but you've got to do that sizing operation perhaps in more detail than you ever did before. I suppose also, you're looking at potentially splitting that, so you could be going from on-premise to part on-prem, part cloud. Again, not all data is equal or needs to live in the same place. And like you said, there's the thought process there; historically you've kind of gone same-for-same but a bit newer. But if you're splitting your datasets and putting them elsewhere, it might be, do we need... everyone's trying to push SSD, and let's not get away from that, but spinning rust is still available, and might that be a suitable medium for what you need if you're part putting into the cloud? Yeah, and depending on the solution as well, you may get a lot more performance than you expect out of a traditional mechanical, spinning-disk hard drive; you know, if you've got enough spindles, you can actually get some surprising performance. Yeah, and let's be honest, they're not expensive. With some of the vendors, obviously everyone knows Dell, we like Dell, but the ME4 takes a vast amount of spindles and you get a huge amount of performance for quite a cost-effective solution. But compare that to a PowerStore full of SSD, with four-to-one data reduction, and it'll be six-to-one with the next firmware, etc. So I guess it's sizing that up. There's a certain amount you can look at at the operating system layer as well, like data deduplication, which is now available in Windows Server, for example. At the operating system level you can actually achieve some economies there over older servers; you know, on Server 2012 you may not have had that, depending on your file system. So, you know, that can be part of your migration. 
An important question on that, I guess: does that make some of the functionality that vendors have built into arrays obsolete? Because obviously dedupe and compression have been a big thing; as you've just said, it used to be you buy 100 terabytes of disk for 100 terabytes of data, or 130 terabytes of disk for 100 terabytes of data. And now you can buy less because obviously we fit more in, but that's because dedupe and compression have been done at the array, primarily. Well, dedupe and compression have been in the arrays for a long time now; the difference is that historically it's been optional. So you'd be able to say, actually, do you or don't you use it. So you know that in the worst-case scenario, as you say, it's 100 terabytes of data, 100 terabytes of disk, but you can then turn on the dedupe and compression and get more from it. Now, when you're getting to the SSD arrays that are all-flash, basically, you're getting to the point where you don't have an option: it's on by default, and you can't turn it off, and you have to rely on that, because you can't price 100 terabytes of flash against 100 terabytes of spinning disk and expect it to be anywhere in the same market. But they rely on that technology to bring it in. So that's important. But yeah, in terms of the operating system making some of it redundant, it's been doing that for a while now, and there's that choice of, do you do it in the hardware vendor's array, or do you do it in the operating system. And there's different places to look at what you do there. So for example, even with things like vSAN, and stuff that's been around for ages, you've been looking at it at the virtualisation layer, so it's in between the two. 
So you've kind of got different places to do all of that, and that's part of what we talked about at the beginning: the planning exercise. Getting that planning exercise right, because if you do your planning properly, you can work out where you're going to use those, and also which of them might conflict with each other, because with some of them, if you turn one on, you won't see the impact because you've got the other one on, or whatever. And you can only dedupe once. Yeah, and you've got to look at what you're doing there as well, or, alternatively, whether or not it's going to hugely impact performance. If you turn on dedupe in two different places, are you going to suddenly see the performance drop through the floor because it can't cope with doing it in both locations? Before we jump into virtualisation though, a couple of things I know we've done, you guys, not me, you guys have done a lot around this: there's the hardware sizing, pre-migration into cloud, but also, what do we call it, the compatibility bit. So if we're looking at getting legacy subsystem A into new subsystem B, that's not necessarily, to coin the phrase, "next, next, finish". It's not as easy as just dragging and dropping the data, and we've seen some of those challenges. Well yeah, it depends. In some cases, there may be a supported migration path; you know, for example, different SANs from the same vendor may have a direct migration path that minimises downtime, maybe by replicating volumes over to the newer storage. And also, other operating systems may have a direct migration path. So it's something to think about: whether you're going to have to do a forklift upgrade, basically, or whether you can do something much more seamless, and that's a consideration for sure. I'll chuck the pre-sizing before cloud one your way, if you don't mind. 
Yeah, it's just as important when you're looking at lifting and shifting an environment not only to look at what hardware you've got on-prem, but at what specification you're putting into the cloud. Cloud is charged on metered usage as a rule of thumb, and if you're using the wrong size virtual machine... if you've got a historic virtualisation estate on-site and the hardware is over-spec'd, you can give away RAM, CPU counts, etc, to machines, because you don't care, you've already paid for the hardware. As soon as you put it into the cloud, if you put something in that's five times the size it needs to be, you'll be paying five times as much for it, even if it's not using that performance, because the virtual machine is spun up at that rate. There's the migration to cloud technologies, which is different, that comes further down the stack, but if you're just lifting and shifting the virtual machines, you've got to look at making sure that you potentially resize them as part of that lift and shift, so you don't end up paying a huge bill you weren't expecting. Yeah, obviously, with migrating to the cloud, you've got that ability to change instance type; it actually becomes very much easier, thankfully. But that is a huge consideration, the cost of that, and looking at that. Because we've seen before when people have gone wholesale, kind of as-we-are, into the cloud, and then the unexpected bill arrives, and yet we're seeing people still pay that bill, and not actually take the time to look at how much money can be saved just by adjusting to the right size. Yeah, absolutely. And there's various software we can run to do that automatically on a regular basis, as we've talked about on previous podcasts, but actually doing it at the point you migrate, as a baseline, is a good idea. Yeah, that makes sense. Okay, so I'll shuffle you on; obviously you're desperate to talk about virtualisation. 
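The "five times the size, five times the bill" point can be made concrete with a toy cost model. The hourly rates below are invented placeholders, not any real cloud provider's pricing; the structure (pay per provisioned vCPU and GB of RAM, regardless of utilisation) is what matters.

```python
# Hedged sketch of why right-sizing before a lift-and-shift matters.
# Rates are made-up placeholder figures, not real cloud prices.

HOURLY_RATE_PER_VCPU = 0.04   # assumed £/vCPU-hour
HOURLY_RATE_PER_GB   = 0.005  # assumed £/GB-RAM-hour
HOURS_PER_MONTH = 730

def monthly_cost(vcpus: int, ram_gb: int) -> float:
    """Metered cost: you pay for the size the VM is provisioned at,
    whether or not the workload actually uses it."""
    hourly = vcpus * HOURLY_RATE_PER_VCPU + ram_gb * HOURLY_RATE_PER_GB
    return hourly * HOURS_PER_MONTH

# Sized like the over-spec'd on-prem VM vs. what the workload needs.
oversized  = monthly_cost(20, 80)
rightsized = monthly_cost(4, 16)

# Provisioning five times the resources costs five times as much.
print(f"£{oversized:.2f} vs £{rightsized:.2f}")
```

On-prem, over-allocation is free once the hardware is bought; in the cloud that same habit becomes a recurring line item, which is why resizing at the point of migration is the natural baseline.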
So I'll chuck that one back in your lap as well. Not quite so desperate, but yeah, I guess. So I mean, the big issue actually is that lots of people have sat on a virtualisation stack for a long time and said, actually, it's working, it's good, etc; you patch it, you move it forwards. But it's amazing the amount of virtualisation stacks out there that aren't patched properly, because people look at patching their OS layer, but not the virtualisation, and now all of a sudden ransomware vendors have started targeting the virtualisation stack, which is pushing people into those upgrades. The problem with that is that you've got to have a supported hardware platform. So, you know, hardware compatibility in terms of - Absolutely, yeah, to keep up with the latest release of, well, of any operating system, but yes, specifically the hardware stuff: you need to have hardware which is up to the spec of the latest VMware or Hyper-V release, so yeah, that's important. It drives off that previous hardware conversation: actually, if you want to go from, let's say, VMware 6.5 to 7, that drives the hardware migration at the same time, and if your hardware isn't on the hardware compatibility list, you may find that out too late. So you've got to think about running all those compatibility checks before you go and do that migration, and then again, supported migration paths. We've bumped into this quite a few times, with people saying, oh, we're going to upgrade VMware; actually, the easiest way to do it, and historically a lot of the time it was the easiest way, is to say, we'll just build a new vCenter and connect everything to it. 
One of the big gotchas at the moment is people with backup systems where the backup system now ties into your virtualisation rather than your operating system, and if you just do a wholesale replacement of your vCenter, all of a sudden your backup footprint doubles. Because historically, your backup is there and it's versioned; if you back it up from a new vCenter, it sees it all as new backups. So your backup footprint doubles. So again, the gotcha is: we've got a cloud backup service, suddenly we've doubled the amount of data we're consuming in the cloud backup service, and by the time you've done that, it's too late to think about it. And we've seen this a couple of times, haven't we? Absolutely, yeah, it's something you have to consider, so actually upgrading the vCenter, which is a bit harder than just building and spinning up a new one, is something you have to seriously consider if you've got that situation. Yeah, you've got to take those things into consideration and work it out. The virtualisation stack is considered an easy option, but actually, you've got to give it a bit more thought these days because of all the integrations with it. Yeah. Just on the back of this, this is a really easy question for me to ask; it's either a no, and I'll be quiet, but you're talking about backing up previous versions of VMware, and so on and so forth. If you're restoring to a later or newer version of VMware, is there any issue pulling that from a backup and then mounting it? If it thinks it should be mounted to 6.5, and now you've got version 7, etc? That's an interesting question, actually. [Laughter] I think it depends on the version; I think there is backwards compatibility in the newer versions. I don't know how far back you can go, but certainly for a previous version or two versions, I think you're okay. 
But you're essentially restoring the VMDK files, or the virtual disk files, and so they remain compatible. But yeah, that's definitely something to consider: if you're restoring a very old backup, like an archive, you know, that will be the situation where you start to run into problems. Also, what operating system is supported to be virtualised? So if we were to go way back, you know, a very old version like NT or something, is that supported anymore on your latest hypervisor stack? Going quite a way back to NT. Yeah, but the potential is there. I mean, we have had clients who've had systems they wanted to run on NT. So, you know, legacy systems clearly, which are very ripe for migration. Yeah, I mean, around this, when we get to the application side of this, we had one particular client who was running systems that date back to 1978, and I remember from the discussions with them, actually, they couldn't get off them because they couldn't get the data out. And when you look at that conversation piece, it's: where do you lift that and move it forwards? Yeah, I guess we're talking about migration, but when we get into that sort of stuff, we're talking kind of active archive, an archive conversation, a lot more, or potentially ETL, that sort of stuff. But people wouldn't necessarily be pulling that from a backup, so that wouldn't be the use case for it. Potentially it's been a painful conversation for no reason, Ben. [Laughter] The one thing that you can think about with the virtualisation or virtualised backup restore is, obviously, we can use that as a cloud migration path as well. So generally, you can restore back to a different platform. It's just, the devil's in the detail, as Ben said. Yeah, so I guess you're looking at either a kind of certified way within the vendor's platform. 
So you're going from a Dell this to a Dell that, or NetApp this to NetApp that, it's nice and easy. If they're different vendors, then we can do things like, so VMware could do that, or from backup - There's different options available for it. Or Hyper-V to VMware, VMware to Hyper-V, depending on what the right choice is going forwards, again, because vendors are changing licencing models and all the rest of it. So when you're moving forward to that new platform, you've got to look at, again, right-sizing the virtualisation infrastructure as well as the hardware. Yeah. Okay, a fair bit for me to try and digest and pretend I understand. So moving up the stack, we move up from virtualisation into the operating system. There's some interesting challenges facing us there with, obviously, as we just touched on, some legacy operating systems; also I know Windows 11, etc, is on the horizon, and there's some interesting stuff there. Well, even closer than the horizon, it's basically here, isn't it? So, yeah, there's hardware considerations with Windows 11. The requirements... Let's drill into that a little bit, as some of that was news to me. Yeah, so we've got the requirement for TPM 2.0, which is the Trusted Platform Module, which is the security chip, essentially, within the hardware. Version 2.0 was released in November 2019. So it's quite recent, really. So if you've got an estate of laptops, for example, that's older than that, then you may need to see what can be done about it, whether that's going to be compatible, to be honest. But this is a TPM module, which I thought was a standard thing; it's a cost option with some of the vendors that we're looking at. So it could be that. It's not guaranteed to be there; it wasn't required in Windows 10. 
And then that release date, I hate to remind everyone, but 2019 puts us right back to just pre-pandemic, and we all know that the first few months of 2020 were a complete bunfight for everyone trying to grab devices, laptops, let's-all-work-from-home. So there's a raft of devices out there that we know people grabbed; the spec wasn't the most important thing, it was just having a device. So there's probably a lot of devices out there that do have TPM, and some that don't. So it will be a consideration for certain people that want to now move to Windows 11 and leverage that, who are potentially bound by the parameters of the hardware they've purchased, unknowingly, I don't know. I mean, if they're going to sweat that asset for five years, you may be looking at a 2025 release, or a 2025 migration date, which doesn't leave you that long to get off Windows 10, to be fair. Not for some people. No, indeed. But yeah, as Ben said, the TPM stuff, the whole idea of security in the operating system: in Windows 10, you could turn BitLocker on and off, for example, the TPM piece was optional; in Windows 11, they've made it compulsory to make it a more secure operating system. You can understand why, because it drives the whole market forward and will drive forward the reputation of the operating system as being more secure, etc. It just means that you're locking out all of the people who traditionally have said, I'll sweat this asset for five years, seven years, nine years in some clients' cases. Fifteen years. Yeah, exactly. I mean, in fairness, we've had pretty good mileage out of that; the Windows 7, 8 and Windows 10 generations have actually been quite lenient, really, if I look back in our history at the supported hardware. Your laptop used to go obsolete very quickly; I think we've actually, as you put it, sweated the asset for quite a long time. You've done quite well out of that. 
But now there's a clear delineation: we've got a piece of hardware which may or may not be in there. So yeah. I mean, the minimum specs haven't jumped that far again either, have they? Because you're still talking 64GB for the OS, and you're talking 4GB of RAM, so actually... Is it really 4GB of RAM, though? Let's be honest. No, it won't be 4GB of RAM - Minimum spec. Yeah, yeah. It'll run. Barely, barely. If you run Teams on it though, that'll fall over fairly quickly. Yeah, but I mean, we all know that minimum specs are not to be... Don't aim for the minimum spec. Yeah, exactly. [Laughter] Don't aim there, unless you want your user experience to be minimum spec. So the bottom line is that the minimum spec hasn't really changed that much; in theory you can still run it on the older hardware. The thing that will be overlooked, if people go, oh yeah, we can roll this estate out, is, for example, if machines need a firmware flash to bring up the latest TPM firmware, again for security reasons, do you have a method for flashing your entire estate's firmware? Because most people have got a method for rolling out a Windows update or similar, and if they haven't, they should have. But on the other side of things, do people have a method for rolling out a firmware update? They're not always that simple. So when you're doing that migration, you've got to think about: are we going to need a manual touch on those machines, or are we going to need a large wholesale hardware replacement, as you said? The people that snapped up what was on the shelf quickly, can-I-have-1,000-of-those to get them rolled out to our users, are they suddenly going to find that actually they need to replace those 1,000 machines, which may mean an earlier budget cycle, or it may mean delaying the upgrade cycle. So certainly, it's a consideration. 
There's DirectX 12 as well, isn't there; the graphics card needs to be capable of DirectX 12 or above, as it comes around. Are there any other thoughts on the operating system level and considerations there? Because I'm quite keen to switch you both on to talking about the application layer and getting into that. No, I think we've already touched on the hardware for the laptops, and so on; there's certain considerations about what an operating system can support in terms of disk volumes, and so on. There are some things you may need to consider on Windows Server, for example: if you're using ReFS volumes, and you're considering disconnecting drives and connecting them to other servers, it's strictly speaking not a portable drive format. It's okay to go up a version, but if you go back again, you won't be able to read the data. So things like that you do need to consider: if you're going upwards, your backwards compatibility is good, but going back the other way, not so much. Try to avoid it. Yeah. Yeah, obviously with server versions moving forwards, it's going to become, in the future, largely similar to what they're doing with the desktop version, I'm sure. So it's worth considering whether or not you buy your servers with the TPM chip in them at the same time these days, if you're doing server upgrades, because it's not a lot of money. I think it was 150 pounds extra or something like that when we did it recently for a client that didn't have TPM in the servers; you could actually buy them as a plug-in module for the servers later. But it's worth considering that that might become part of what you're trying to achieve in the long run as well. 
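The Windows 11 readiness checks discussed above (TPM 2.0, 4 GB RAM, 64 GB storage, DirectX 12) lend themselves to a simple estate audit. The sketch below is illustrative only: the device records are invented, and a real audit would pull these fields from your endpoint-management tooling rather than hard-coding them.

```python
# Illustrative estate-audit sketch: flag devices that miss the Windows 11
# minimums discussed in the conversation. Device data here is made up.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    tpm_version: float   # 0.0 if no TPM present
    ram_gb: int
    storage_gb: int
    directx: int

def win11_blockers(d: Device) -> list[str]:
    """Return the list of minimum-spec requirements this device fails."""
    blockers = []
    if d.tpm_version < 2.0:
        blockers.append("TPM 2.0")
    if d.ram_gb < 4:
        blockers.append("4 GB RAM")
    if d.storage_gb < 64:
        blockers.append("64 GB storage")
    if d.directx < 12:
        blockers.append("DirectX 12")
    return blockers

estate = [
    Device("pandemic-era laptop", tpm_version=1.2, ram_gb=8, storage_gb=256, directx=12),
    Device("2021 laptop", tpm_version=2.0, ram_gb=16, storage_gb=512, directx=12),
]
for d in estate:
    print(d.name, "->", win11_blockers(d) or "eligible")
```

Running a check like this across the estate is what tells you whether you're facing a firmware flash, a manual touch per machine, or a wholesale hardware replacement.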
Okay, so we're going to go from server; my brain naturally leads us up into the application layer, and we've seen over the last couple of years challenges with people, I guess, having fun with vendors around some of the commercials of what it will cost them to migrate some of this data from either live or legacy platforms, and I know, Rupert, specifically you've had a bit of work in this area. One or two. Exactly. It'll be good just to get some information out of you, the sort of things you're seeing, the challenges people are facing, because obviously it's not a one-time thing. We've had more than a few instances where we've helped people, but it's an ongoing challenge that people are facing. Yeah, sure. So I mean, you've got different things. You've got the standard applications that people will look at. So they'll talk about things like Office and stuff like that; we go through that all the time, of how you're going to upgrade to the latest version of Office, what you're going to do to upgrade to the latest version of whatever it might be, Sage, SAP, whatever you're doing on the client-side, and there's the client upgrades. I mean, we talked earlier about operating systems; there's newer deployment mechanisms, a whole lot of Azure-joined stuff such as Autopilot, for getting operating systems out, and deploying new apps while you're doing that can be built into those. So there's the client-side of applications to get out there, but in terms of the actual thing that's really locking people in on legacy platforms, it generally seems to be data. So if you've got a major issue with an old application, a legacy application, you need to get that data out. It's not uncommon at all for vendors to play the, well, there's no way to get that data out, you need to do this, and you need to keep buying our platform for the next 25 years, whatever it might be. 
And more often than not... we've done a lot of work in the ETL space recently, sort of extract, transform and load. I'm glad you told me. For the sake of completeness. Where you're basically saying, actually, let's have a look at that system and how we can get the data out. We did one just recently where one of the large database vendors was suggesting that they'd need to spend, I think it was, a quarter of a million pounds on upgrading licences and keeping the system ad infinitum, and we said that we thought we could get the data out in about 20 days, rather than doing that, and then give the client the data, so they could do something else with it. We proposed a three-day POC, and actually we did the whole job in the three-day POC. So it saved, give or take, a quarter of a million pounds. A quarter of a million pounds, that's obviously a significant saving. Yeah, absolutely. I mean, it was, but it's not uncommon. We see it a lot in the space of trying to get stuff out of legacy applications. As a rule of thumb, vendors, and I don't want to tar them all with the same brush, but as a rule of thumb, vendors will often try and keep you on their platform by pointing out how hard it is to migrate away. Our job in a lot of these situations is to actually look at it: if the right thing to do is to stay on that vendor's platform, stay there, upgrade it, bring it in, in that whole cycle that we've been talking about, and get them up to the current version, and then you go back through the usual cycle of, what are the supported hardware versions? What are the supported software versions? Which operating systems are supported? And then you put the latest application on there. But actually, in a lot of cases, people are talking about migrating away from legacy systems, which is what came out in the customer satisfaction survey. 
They're talking about, how do I get rid of that thing that's 20 years old, ticking away in the corner, that I don't need anymore? Yeah, and paying a fortune for. Yeah, we see it a lot in the M&A work. Ben and I have just been on a project where they're moving from a very large organisation; they've carved off a small piece of it, and our client has bought that small piece, and actually, we're looking at getting involved in the various different carve-outs of the systems going there. There's a whole bunch of stuff that's not needed from, for example, a very big SAP installation. Now, they've got their SAP expert working on that, but that's just one example where not taking all of that across is saving them millions in SAP licencing costs. Again, we did a job recently with SAP where they needed a piece of data out of it, not the legacy SAP system, and actually, we were able to extract that piece of data using the APIs that connect into SAP, give it back to the client, and save them the need to licence SAP, and all those components of SAP, for some time in the future. You know, the actual volume of data might be considerably smaller than the original whole system you're looking at; so even in terms of the volume of data you're having to move, it's comparatively small. It's just understanding what that is, and working on the application layer to extract it. That's it, and normally it's the compliance or the data governance teams who are making you hold on to that data, for valid reasons. It's, okay, you need to hold this for seven years and be able to access it. That doesn't mean you've got to hold it in the source system. So a lot of the time you can say, actually, I've still got the data, I can still pull back whatever we were meant to keep for regulatory reasons, but I don't need it in the source system. Yeah, it comes down to where we started on this, going full circle: we're looking at types of data; not all data is equal, is it? 
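The extract-transform-load approach described above can be sketched minimally: pull the regulated data out of a legacy system, reshape it into a vendor-neutral form, and park it somewhere cheap and read-only. Everything specific here is hypothetical — the table, columns, file name and in-memory database are stand-ins for illustration, not any client's actual system.

```python
# Minimal ETL sketch of the approach described in the conversation.
# Table, column and file names are hypothetical examples.

import csv
import sqlite3

def extract(conn: sqlite3.Connection) -> list:
    # Extract: read only what compliance requires, not the whole system.
    return conn.execute(
        "SELECT invoice_id, customer, amount, issued_on FROM invoices"
    ).fetchall()

def transform(rows: list) -> list:
    # Transform: normalise into a plain, vendor-neutral shape.
    return [
        {"invoice_id": r[0], "customer": r[1].strip().title(),
         "amount": round(float(r[2]), 2), "issued_on": r[3]}
        for r in rows
    ]

def load(records: list, path: str) -> None:
    # Load: a flat CSV stays readable for seven years without the
    # source system or its licences.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)

# Demo against an in-memory stand-in for the legacy database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (invoice_id, customer, amount, issued_on)")
conn.execute("INSERT INTO invoices VALUES (1, ' acme ltd ', '125.50', '2003-04-01')")
records = transform(extract(conn))
load(records, "legacy_invoices.csv")
print(records[0]["customer"])  # Acme Ltd
```

In a real engagement the extract step is the hard part (legacy APIs, proprietary formats), but the shape of the job is the same: once the regulated subset is out in a neutral format, the expensive source system can be retired.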
So looking at that information you need: potentially you need some of it, but do you need to be running a very expensive system just in case, if we can transform it and move it somewhere else and still get access to the raw data? We don't need all the trimmings that we're paying a fortune for, that weren't really doing much other than costing us an awful lot of money. Yeah, especially if, let's say, it's data that you're just going to refer to, essentially; it's read-only. Yeah. So you're not really working with it in the same way as you were, you're just referring back to it; then there's a whole load of functionality you don't need in the system you're migrating it to. Yeah, absolutely. We had a very interesting conversation recently with a potential client where they've got an AI platform they want to feed a load of data into, and actually getting access to all that legacy data will enable them to feed their AI and let the AI learn a lot faster. And it's a question of someone assisting them with that piece in the middle, to get the data out of the legacy systems and feed it into the modern systems; there's a whole new take on it as well. So it's migrating away from those legacy systems, and either keeping the data just for compliance reasons, parking it away somewhere read-only as Ben said, or saying, actually, we want to feed it into something new and modernise the whole system. Or do something tangible with it. Yeah, exactly. Brilliant. So, we've done all sorts. And we should continue to do so. Yes. [Laughter] Well, listen guys, thank you very much, it's been really interesting today. I'm sure we'll do more of this. Hopefully, we can get some more, I guess, deeper technical information than, certainly, I bring to the table. So thank, I'll start again, thank you very much for your time. Thank you for having me. Yeah, thanks. Cheers guys. 
And thank you for joining us on this edition of Krome Cast, Tech-it-Out. By all means, leave feedback in the comment section below, and let us know if there's anything you'd like us to cover in future episodes. Remember to like, comment, subscribe and share, and join us again on Krome Cast, Tech-it-Out.