- Roger 4807, approaching runway seven bravo. - The Air Force has announced the creation of a new information operations technical training school. - So in our business, national security, where our job is to fly, fight, win, we'd better be masters at this game of innovation. - Air Force basic military training has an updated curriculum with a new focus on readiness and lethality. - This is the Developing Mach-21 Airmen podcast. (booming) Hey everybody, welcome in to Developing Mach-21 Airmen episode number five today, and we're talkin' about all things Pilot Training Next 2.0, so it should be a great podcast. I think you're gonna love it. Thanks for the subscribe, stream, or download. However you might be listening in today, if you get a chance to throw us some stars, or even have a little extra time and want to give us a review, we certainly would appreciate you doin' that as well. We love all the feedback that we've been gettin' so far here on Developing Mach-21 Airmen. My name is Dan Hawkins from the Air Education and Training Command Public Affairs Office, and I'm your host for this professional development podcast dedicated to bringing total force, big-A Airmen insight, tips, tricks, and lessons learned from the recruiting, training, and education field. Tons of great stuff today that we're gonna talk about with the director of Pilot Training Next, Lieutenant Colonel Paul Vicars. He goes by the call sign Slew, and he's been with the program since day one, literally getting the charge from Lieutenant General Kwast to build this program, and he's seen it from the days when they were takin' the VR simulators out of the box and havin' to put 'em together right as class was starting in version one. This second iteration has a few differences from the first cohort, which graduated late last summer with 13 new pilots earning their wings. One of the major differences is the fact that they aren't trying to build as many processes concurrently while they have the students on the ground, which they had to do in version one, so the flow of the class that started in mid-January is going a little bit smoother. Lieutenant Colonel Vicars also spends time on the focus areas for this edition of Pilot Training Next, including innovation, which also includes the art of failing forward, and this shows that the Air Force is willing to assume reasonable risk in that innovation process, so he's gonna talk a little bit about that and also their partnership with AFWERX. He also talks about the scalability of this program in terms of how they create a model that can be replicated on a much larger scale across undergraduate pilot training, including what elements of the training can contribute to the larger Learning Next initiatives in AETC, and he also talks about the use of big data to help the training process. One of the more revealing aspects of our conversation with Colonel Vicars was how much the PTN team learned in iteration one about what not to do with data, so in this second iteration, they're hoping to get more use out of the data that they're able to collect and what the analysis of that data reveals. Of course, the use of immersive technology is something that we spent a great deal of time discussing, including how the VR simulator events will be monitored by artificial intelligence for grading and data tagging. Additionally, the focus on student-centered learning continues, and the push to give more control of learning over to the students, both of which are in line with the AETC strategic plan. 
A few of the other things that we get the chance to touch on during the podcast were the talent selection process, which included cadets at the U.S. Air Force Academy in Colorado Springs going through a distance learning process, and why enlisted Airmen are part of the course at Pilot Training Next. We also discuss the joint feel with two U.S. Navy students in attendance with this version two, as well as the international flavor of having a Royal Air Force officer from the United Kingdom in the mix as well, so away we go. Episode five of Developing Mach-21 Airmen starts right now. (swooshing) So, Colonel Vicars, just tell us a little bit about yourself and how long you've been in the Air Force. - I enlisted in 1994 as an intel Airman. Got the opportunity to commission. I did that through the SOAR program in 1999. I flew F-16s for a few years. My last F-16 sortie was in 2006, so it's been quite a while. I know what they look like still, but it's been a while since I've touched one. I flew T-38s as an IFF instructor, Intro to Fighter Fundamentals, from 2007 to 2009 at Randolph, then school, staff work, and the like. Came back, and I was a T-38 squadron commander at Vance, so I've got a fighter background, and I've got quite a bit of experience in the UPT and IFF environment as well, so I've got a lot of time teaching. - So, you end up with a big background, like you said, in the UPT environment and in AETC, so how did this job come about for you, being the first director of Pilot Training Next? - It was, I think, more of a fluke. You know, everything is most of the time up to luck and timing, and all you can do is be the best version of yourself so that when those opportunities arrive, you're a viable candidate. More than anything else, really, I had just graduated Air War College and ended up on General Kwast's inbound list when he was the Air University commander, and I think he knew at the time that he was going to take over AETC, and he wanted to look at how to innovate pilot training, so when I basically moved in right after Air War College, within a month and a half or so, he asked me to start looking at ways to innovate pilot training, and there was already some initial work that had been done by a few folks that he had tagged from SAS, and it started from there. Market research began in earnest in late June, early July of 2017, and we were moving into Austin in January 2018, within six months, to stand up a class and start training and using the new technology. - It really is hard to believe, but now we're already starting version two of Pilot Training Next, but just to go back in time, obviously a lot of work went into making Pilot Training Next version one happen. When you first started Pilot Training Next and got your marching orders from General Kwast, what were those marching orders, and then what were the results of that first iteration of Pilot Training Next? - So, I think the words that General Kwast said to me, standing in his office at AU in late June, early July, I'm not sure of the date, were, Slew, I want you to run my pilot training modernization strategy. 
Yes, sir, and when I walked out the door, that was the big picture guidance, and we had talked several times about what he wanted to do with the new learning methodologies, the new technologies from VR to AI, and different ways to analyze and look at data, so we had some big picture ideas about where we wanted to go, but to be honest with ya, my biggest hurdle early on, and I think what stressed me out the most, was figuring out how to spend government dollars. These are taxpayer dollars, and that's all important stuff. You don't wanna take that lightly, so moving out in an aggressive fashion to get at the boss's intent without breaking the law was my biggest learning point, so once we had figured out the best way to get money on contract and execute, then a lot started happening really quickly afterwards. We were able to leverage a contract that gives us a lot of flexibility, time and materials, in how we can get unique resources and skills and whatnot in and out of our hands for different uses, and we can break stuff and get rid of it and buy something new and start this process of iteration, so that began in earnest in January, when the contract began and we were basically standing up the facility, under contract, with students showing up, instructors showing up, and tech showing up within a month, so a very, very early perception of risk for me was that I had students, instructors, and tech all showing up at the same time, where if you planned this out, you'd have the tech show up. You'd integrate it. You'd make sure it works, then you'd have your instructors show up. You'd teach 'em how to use it and get 'em bought into the idea of what we're trying to do, and then you'd bring the students in. Well, when the students walked in the door, the instructors had only been here for a couple of days, and there were still boxes showing up, so it was very, very aggressive. Our primary success metric was just keep goin'. Press hard, and the students'll let you know if it fails. If they can't get it done with where you're at, then they'll let you know, so keep at it, and they took us a lot further than I would have ever expected, given the fact that we didn't have but one simulator put together. We call 'em immersive training devices. When they walked through the door, that's all that we had, so they were literally building the tools by which they would innovate pilot training with their own hands and screwdrivers and whatnot. Puttin' the chairs together and pluggin' the USB cords into the computers and whatnot, so they were doin' that on their own. - And I think that's really what makes it incredible. The amount of learning that was able to happen in such a short time, considering that wasn't all they were learning. - No, so it's a very frustrating and just challenging environment. We have a pretty quick procurement turnaround, within 30 days from ordering something, we have it, and that is pretty unique, but as a student and an instructor trying to learn in this environment, 30 days is a long time, so the problem that you identified three weeks ago is still there. It still persists, and the tools were in no way optimized for what we were asking. The T-6 model that we were flying was sub-optimized. It would stall. Correction, it wouldn't stall. It would just go right into a spin. Once you got to the stall indication, it would just spin and fall out of the sky. 
The version of VR that we were using, the resolution just wasn't that good when we started out, so they had to overcome those challenges. The tech was unstable. Our AI, we never really got to use during version one, so we put a lot of effort into making sure that it's going to be much more useful during version two, so it was very, very frustrating throughout, and to be honest with you, that's what innovation actually looks like. It's on the edge where things don't work, and it's always broken, and finding what the solutions are. Our philosophy is the best people to determine what right looks like are the instructors and students, the people that use the tools directly. It's not me as the staff officer that would determine the right use cases or how to optimize it. If you wanna use it in the best way possible, the end users are the ones that'll help ya sort out what right looks like. - And it's interesting because, really, that focus in PTN is on how Airmen learn, not necessarily what they learn. That was one of your charges, exploring technology and how that technology can produce better and faster learning, so what did that look like in PTN version one, and what are the lessons that you carry forward now, moving into iteration two? - So, we have several different major learning points that we draw out of version one, but to highlight your first point, how Airmen learn at a larger level, not just how they learn in the pilot training context, that's something that we've taken pretty seriously. We put together a model that would be the way Pilot Training Next understands individual Airmen learning, and we call it the Three Loops Model. The first loop closes an experiential loop, so any time someone engages with the content, it tailors toward them in that moment. For us, it's a matter of changing the environment, optimizing it to keep the student on an optimum learning curve while they're in the simulator, and that work is ongoing. We've closed the loop, but it's not optimized yet. The second loop is to create an adaptive training system so that their next events are based on their previous performance, not what we would consider a syllabus. It's not preplanned. It's based off of previous performance, and then the third loop is to take all that data at the end of the course, and throughout as well, and feed that back into your accessions system. That's all built on the foundation of excellent teachers, so you need to make sure that your teachers have a solid understanding of individual progression and how to optimize themselves to help students progress at their own pace, and then General Kwast's fifth priority also fits in there, which is training context. If there's opportunity to build a training environment where you recognize, I'll say, the nature of war in that environment, that it's competitive, that you're not gonna be just given things. You have to take 'em, and you can't take much for granted, from your communication channels to your access to other resources. It's gonna be a competitive environment, and you have to learn in a context where those things are not always gonna be available to you, so that's how we think about training in this environment. The first big learning point was we drew out the Three Loops Model. The other one is that VR is very capable, so a couple of the studies that we did during version one. We had a team from a company called Aptima that works with AFRL. 
They do a simulator fidelity survey and study, and despite all the complaining, I'll say valid, legitimate complaining, that the students did during version one, and that's what we hired them to do, come in, complain about this tech, break it, and point out all our failures in building this environment, for all their complaining, the feedback that they gave in this survey was still overwhelmingly positive, so despite the fact that it was not optimized for what we wanted, what they needed, they still found great use out of it. We also had IDA, the Institute for Defense Analyses, look at our data, and they asked the question, does sim performance transfer to the aircraft, and basically from iteration one, again with the simulator not optimized for the student, we saw that simulator performance predicted flight performance at least 70% of the time, so I think that's pretty positive. The simulator has both subjective value, meaning the students liked it. They saw that it was a useful tool for them. And it has objective value, meaning it did prove beneficial as far as predicting student performance in the air. - Now, just from a learning perspective, in layman's terms, for someone who may not be as familiar with pilot training, is that something that can translate when you're in the traditional UPT model with the sims? Is that kind of 70% number comparable, or what does that look like? - It's a different logic, so I don't know how it would apply in the legacy system unless you accept the same model that we're doing here, which is train in the simulator, train in the immersive training devices, and then validate in the aircraft, so that logic of progression, I think, makes the question they asked, does the sim transfer, particularly appropriate to us. It wouldn't necessarily transfer. That said, the methodology, the math that they did in the background, there's a lot of different use cases for that process in the legacy system right now as well that I think we could really leverage in the near term to help flight commanders and squadron commanders and OGs and wing commanders make decisions about Airmen, with Airmen, about their future. - Now, in PTN version one, there was a bit of an issue where you had some unique time constraints that kind of hampered your processes a little bit, and obviously, you want quality as a constant and time as a variable. That's one of those long-held, industrial-age paradigms that General Kwast wants to break. Can you kind of talk through that and what you're doing now in version two as opposed to version one? - Right, so we're using the Google model of innovation, where you set something apart from the current culture and let it develop its own culture and learn within itself what these things, the tools and the technology, can do. Since we opted for that model, that requires us to be TDY. It doesn't mandate it, but it allows us the most flexibility. We don't have to disrupt Airmen's lives as much by PCSing everybody out, so we opted for the TDY model, which puts a unique constraint on us of 179 days. That said, during iteration one, we got 13 of the students across the line in 179 days, but I think we sacrificed a lot to actually get there. 
We really pushed hard late in the game, and everybody worked very, very hard to cross that line, so it almost added extra rigor that would not be there in, I'll say, any type of model that transfers; that constraint would be gone. So what are we doing in version two? I think that we're gonna try to make early decisions about those who would require, perhaps, more than six months, and PCS 'em down to Randolph to participate in the stand up of PTN as it continues down there, so those that we think can accelerate and get done within six months, we'll keep them here and press to get 'em done, and those that look like they'll take longer, we'll move them down with the goal of actually seeing how long it takes. You know, if someone needs to take 12 months, then we'll leave them for 12 months. If they need only eight, then we'll get 'em done in eight, so we don't have that kind of false, imposed 179-day constraint. It becomes an administrative factor rather than a training factor like it was in version one. - And so really, that kind of speaks to time, which is one of the paradigms that General Kwast wants to break, but it also speaks to the idea that the students in a learner-centric environment control the learning. - Absolutely, so one of the challenges that we have is we want to structure this environment to measure competence, not time. Right now, within emails that are flowing through the pipes within AETC and the pilot training bases, we've actually laid down student numbers all the way out to, probably, 2023, beginning class and graduating class. Those have been laid down with the five-year pump cycle. To pull away from that toward individualized learning creates a little bit of chaos, right? People start when they're ready, and they graduate when they're done, so you need to create several different mechanisms to pull order out of that chaos and be able to predict student performance based off of early progression and start doing those correlations that can help you understand when people are gonna be completed and the directions that they're gonna go. I think the data potential as well as the tools that do data analysis help us move in that direction. Also, you need to structure the data to understand competencies, and that's not necessarily where we are right now. In the legacy system, you get about 80 hours per platform. You know, 80 hours in a T-6, and then 80 hours in either the T-38 or the T-1, roughly, and that time equals competency in some fashion. That equals experience. Also, you have on the other side a specific number of tasks that people need to get up to a specific standard and perform at that level, so those two things give us confidence. When really, what we want to measure is not just task performance compounded, added up so that this total amount of time equals success. What we want to understand is actually the fundamental competencies that individuals have. How are you at spatial awareness, and how good are you at spatial reasoning? Task management, and all those things, we do measure quite a bit, but we really only indicate failure, and we don't necessarily look at how well and how high a person can go and train to some of those things, so we're hopin' to actually start focusing in on some more fundamental competencies and letting people finish this program based off of what that data suggests. - And so, kind of transitioning, but not really, talking about data, I know biometric data was kind of a hope for version one. 
It didn't necessarily pan out the way you thought it would, but moving forward, again, yet another way that you're trying to be in line with General Kwast's strategic vision for the command is using big data-driven decision making to shape, or reimagine, what our training pipelines look like. - So, it would be a disservice to the role that General Kwast has given us if I didn't talk about our failures, right? 'Cause that's our number one job, is to get out here and fail, so where have we failed? First is in our data in version one. We learned more about how not to do data in version one than we actually learned from the data of version one. There's still a lot of analysis to be done, but I think there was a lot of learning. If you think of a true experiment, you generally structure an environment around your sensors to make certain you have very clean data. Well, what we're trying to do is build a training environment and structure sensors within that training environment to collect data, so it's a different mentality, so we're workin' our way, iteratin' through a variety of sensors to help us understand the data. Biometrics, I'll say that there're still tons of opportunity for failure in that space. First of all, collecting the data's not easy, and then there's a lot of opportunity for correlating it with performance, but there's not a lot of agreement on how best that works, so there's several different ways we can go. Again, still lots of opportunity for failure in that, so biometrics is a big challenge. Making it accessible real time is the biggest challenge that we have, so we're collecting the heart rate data, and we're collecting cognitive load and that kind of thing, but being able to use it real time to iterate that training experience, like I talked about earlier, or provide it to the instructor so we can see, actually, how hard the student's working. We've always leaned on some tried and true things. Are they missing radio calls? How hard are they breathing? All those things are pretty accurately suggestive of how hard a student's working, but we can know a little bit more about, objectively, how hard a student's mind is working. Don't know where that data's gonna take us yet, but we're trying to collect it right now. The other thing on the data side is that we lost a lot of data in version one that we could have been collecting, that we just didn't, or thought we were but, due to a glitch in the system, weren't collecting, so we've made a lot of corrections in that. We've actually built dashboards to help us see when the data's coming in and all that, so we have a lot more confidence in version two, not only that we're getting the data, but that it's gonna be a lot cleaner in collection than it was in version one. - So from a technology perspective, I know you've been, obviously, learning as you go with the VR, and that's been a huge part of what you guys are doing, but can you talk a little bit about just some of the improvements you've made from a VR perspective now as you roll into version two? - So VR's a tech that's advancing very rapidly, and it's advancing independently of any demand signal from the DoD or, I'll say, training spaces. 
That's being driven by the consumer market, and that's moving fast, so we just want to keep up with that as best as possible, but how do we structure a training environment around a tool as capable as VR? That is really the great challenge, and I think where a lot of good insights are coming from. We have built, I'll say, some very customized 360 videos for training, and we'll see how the students value that. We've built emergency procedures trainers, where you are in VR, and you can interact with an environment to work through emergency procedures. It's gonna be kind of the stand-up of the future, as it were, where you get to interact, and everybody gets to watch your interactions and see actually what you would do, and you have the checklist there, and you actually manipulate the aircraft in the process, so there's some of those tools, some interactive and engaging content development. We're also lookin' at providing scenarios. I think one of the great advancements that we're gonna be able to leverage in version two that was not quite ready for version one is our artificial intelligence tutor, so that when the students go home to their dorm rooms, they're not just creating bad habits, but there is something there watching them and providing them feedback on their performance. It's also something that, hopefully by the end of version two, we can figure out how best to use as an instructor offload tool, so that the AI can provide at least a baseline set of grades that the instructor can go back in and modify as he or she sees fit and can say, hey, just monitor altitude and airspeed for me. I'm gonna keep track of the rest, and offload some of those instructor requirements to the AI. Another thing that it does, besides instructor offload, as well as tracking student performance in their off time, when an actual instructor isn't there, is that it tags the data, and this is one of the most important things. If I'm gonna analyze student performance, I need to know what that performance is. Are they doing a loop, or are they doing a nose-high recovery? That's a significant thing. You grade those differently, and the AI actually can help us tag the data so that we know student intent within it, and then we can go back and measure things a bit more objectively, and that's very helpful. It produces a tagged set of data, which makes it that much more usable for analysis. - I think, sometimes, too, getting lost in the mix can be the academic side of learning how to fly an airplane, so talk a little bit about the restructuring, because I know that that's been a significant effort by your staff as well. - To be honest with ya, it's one of the things I'm proudest of and that makes me very happy to see across the institution right now. There's a lot of energy going into academics across the UPT wings, and we are trying to collect all that as best we can. As all the other organizations out there are building content, CSI instructors are making videos, and they're putting different content online, we're trying to collect it all into our learning management system so that we have as much content as possible. The idea is to structure two spaces. 
There's the sanctioned space on one side, where if you wanna progress through academics, you have to do these things, but on the other side, there's this social space, where the content, as it were, gets collected, and it gets valued, Amazon style, thumbs up, thumbs down, three stars, five stars, and as students like things and they go up in value, something emerges socially from that space, and we can look at porting it over into the sanctioned side, so it's not just stuff that we're doing. There's a lot of energy across the UPT enterprise right now to create content, and we're doin' our best to capture all of it, but it's difficult to keep up because there's a lot of energy out there. - And just like everything else, I feel like it is kind of a recurring thing, but the use of this academic structure, really, has taken away some of those artificial time constraints that have been placed on learning. - Yup, so after completion of version one, before the start of version two, we took four simulators to the Air Force Academy and selected eight students from the Academy to do a distance learning program with those things, and they spent a lot of time repairing them. They were not stable at all, so just like the version one students, they actually spent a lotta time just keeping those things fixed, because it wasn't a fixed-base kind of thing. We didn't have all the local support that we had here, so there was a lotta good learning there. They spent more time fixing the sims than they did leveraging them, but they got about eight weeks, and it was all AI graded, so all their maneuvers were instructed, demoed, and graded by our artificial intelligence, so we have grade books for these eight lieutenants that were built entirely by an artificial intelligence instructor, and it's showin' progress. They were able to get, from an objective measurement of performance, up to goods and excellents in their grading for individual maneuvers, pretty consistently over time. Also, we put them through our kind of talent selection process. We had their performance, their grade books. We gave a checkride, a long-distance checkride, so our check pilots were here in Austin. The students were flying their simulators at the Air Force Academy, and we basically monitored their performance and provided feedback for them, and the instructor feedback was that their decision making, task management, and just the way they managed themselves in the environment was on par with someone 14 to 16 rides into the program normally, so as soon as we get these guys to a point where they're able to fly, we're gonna give 'em a little checkride and see how much they did actually learn and how we can value that time that they spent ahead of time. Ultimately, we picked four of the eight to come in, and we'll see how many gains we had from the time that we spent there with them ahead of time. - And this is really a good point to kinda transition into that selection piece of how you got to your class size here at Pilot Training Next, which also includes not only sister service pilot candidates, but also an international candidate. - So, we had a lot of variety within our cohort this version. We went with the 15 officer, five enlisted model that we did in version one, with the enlisted being the primary way that we're testing our selection metrics. 
If we want to innovate within the selection space, we're lookin' to reach outside of the legacy cohort and our legacy talent pool, and that's what the enlisted represent for us, so these are great Americans steppin' up to capitalize on a phenomenal opportunity, and they're teachin' us a lot about how to look for talent and the metrics that we have there. We also added six RPA Next students that are gonna do a sim-only program with us, so we actually have 26 students instead of just the 20 that we had last time. Within the 15 officers that we have, that are kind of in the PTN pilot track, we have the four from the Academy, that special kind of selection there. We have five from the current system. We have one individual that's upgrading from an 18X to an 11U, so he's an RPA guy, specially selected to be brought in. We have two Navy as well as one RAF student, so we're combined and joint in this process, and then we have two Guard guys as well, so a very diverse and robust group of folks, and I'm actually very happy with these guys. They all come in very energetic. Honestly, an important part of this, and it will skew the data in a lotta ways, is that you can't just bring anybody into this program. You need talented people that are going to help you iterate. Right now, the thing that is in question is not them. It's the training process, and I need people that are going to be aggressive, that are gonna be able to understand how we learn in this environment and tell me when it's wrong, not just have it imposed on them, so we need the right kind of people, and these guys are very energetic. - So, to kind of wrap this up, I know you have spent a lot of time over the last year going out and educating the Air Force at large about Pilot Training Next, and you've had a lot of high level visitors, including the Secretary of the Air Force, roll through Austin to see your program, but there still might exist some untruths out there, word of mouth, so now's your chance to kinda dispel the myths, if you will. What are some of the myths that you'd like to dispel? - I think probably the most egregious one I've heard is that we've brought in all students that have, like, thousands of hours of flying already so that the program's guaranteed to succeed. I'll say that our success is in our failure. We're not trying to pad anything. The intent is really to test the technology, so as I alluded to, I need a special kind of Airman that knows how to do that kind of thing to help us test this technology. If they can't learn with it, no one can. If they can learn with it, then we can start scaling back into more representative samples, but starting out, we just gotta see where that goes, so while we did strive to take volunteers, and we want competent people that can help us out, we didn't try to pad the stats with 1,000-hour students. I think a lotta speculation has gone into how our students are doing right now, as far as in the FTUs and all that, and our feedback is, I'll say, that we have students performing across the bell curve. Some are better than others, as you would expect, and the feedback is some things are good, some things need work, obviously, so with the alpha version, we were never expecting to be 100%. You know, this is the goal, so there's some good learning there, and there's just a lotta speculation about how they're performing, and, I'll say, without going into specifics, there're just misconceptions about how the students are doing overall. 
I think ultimately, what I would like people to know about this program, to help shape their thoughts, is that we are doing our best to be as objective as we can about the technology and the tools and the processes that we're bringing in. I'm structuring a way to make sure that the data that comes outta this thing is looked at in a variety of different ways before we validate anything, so I think the most important thing we can do is be objective, and what that truly means is that when we fail, and we fail often, we need to be open and honest about it. This is the type of thing that doesn't occur often, and our success is determined by how fast we fail, how often we fail, and how small we fail, so we need to structure ourselves to make it a painful environment. It's gonna be tough. We're gonna be pushin' the tech further than it can go. We're gonna be asking an instructor pilot to monitor six students when the tech does not allow six students to be monitored effectively, so what do you need if you're gonna monitor six students? What do I need to give ya? And where's the tech gonna help us do that? So it's hard for these guys to endure the questions that we're asking, and we're pushing small failures like that. Hopefully, by the end of this thing, with all those small failures, we will have learned enough to be able to build something that the institution can lean on. - Ultimately, our goal is to make our Air Force more lethal and ready, and innovation is such a huge part of it, and there's so much innovation going on here, but we wanna say thanks for your time, and we appreciate it. - Pleasure, thank you. (booming) - Wow, just a ton of goodness happening up in Austin at Pilot Training Next. Exciting times in the flying training world indeed, and of course across the entire First Command. We wanna say thank you to Lieutenant Colonel Vicars for takin' some time out to talk to us despite a very hectic schedule up there at PTN. As a reminder, you can follow Air Education and Training Command via social media on Facebook. You can also follow us on Twitter and Instagram, as well as on the web at www.aetc.af.mil. Thanks for checkin' out the podcast as we dive into the world of recruiting, training, and education. For our entire AETC Public Affairs staff, I'm Dan Hawkins. So long, and we'll talk to you next time on Developing Mach-21 Airmen. (intense music)