
AI Future, Synthetic Data, AI Mistakes, AI Governance, Crypto Regulations, Knowledge Economy – Interview of Tammy Roust by Joanne Z. Tan


Cutting-edge insights from Tammy Roust: an AI expert, former NASA senior engineer, Chief Data Officer for state and federal governments and at fintech companies, and author of cybersecurity regulations. She has been on the cutting edge of AI and data, had a front-row seat to the 2008 financial crisis, and worked on crypto regulations. In this interview, she discusses the future of AI, crypto’s fate, the dangers of synthetic data, the future economy, and her personal stories of how she earned a PhD, three master’s degrees, and various accomplishments while being a mother to three young children.

This ENTIRE interview has three parts:

– 1st part traces Tammy Roust’s career journey from NASA to fintech, her hands-on experience with AI almost 20 years ago, then the important roles she played as Chief Data Officer at the state and federal government levels, and at a fintech startup;

– 2nd part is about fintech, crypto, AI, data, governmental regulations, and the future economy;

– 3rd part is about her personal story and goals.

Some of the insights will be quoted in Joanne Z. Tan’s upcoming book, “Brand Building for Leaders and Organizations”. Subscribers to the 10 Plus Brand Newsletter will get below-market pricing and a sneak preview.

This transcript of the interview has been edited, with subheadings added for reading convenience.

To watch it as a 1-hour video

To listen to it as a 1-hour podcast

Introduction of Tammy Roust: NASA, data scientist & AI pioneer in fintech; Chief Data Officer at state and federal governments; Crypto regulation

Joanne Tan 0:00
Welcome to the Interviews of Notables and Influencers. I’m Joanne Tan, the host and producer. I’m here with Tammy Roust, or Tamara Roust, who, among other things, in her 20-plus years of career, has worked as a senior engineer at NASA, and as a quant – how do you define quant? An expert at analyzing and managing quantitative data – at a hedge fund about 20 years ago, and as the Chief Data Officer of the State of Illinois and at the Commodity Futures Trading Commission (CFTC). Tammy has also been the lead responsible for AI efforts at multiple organizations, including the CFTC and a startup. We’re going to pick her brain today on many issues relating to AI, data, crypto, governmental regulations in these areas, and her personal stories of earning a PhD and three master’s degrees, and succeeding in her careers while being a mother to three young children. This interview consists of three parts: First is Tammy’s career journey from NASA to fintech and her hands-on experience with AI almost 20 years ago, then the important roles she played at the state and federal government levels and at a startup. The second part is about fintech, crypto, AI, data, governmental regulations thereof, and the future economy. And the third part is about her personal story and goals. Welcome, Tammy!

Tammy Roust 1:44
Thank you, Joanne, it’s a privilege to be here, and I want to be clear that I’m appearing here in my personal capacity, and that my opinions are my own and not those of my past, present or future employers.

Part 1: NASA, Fintech, Government – Data Engineer, Data automation with AI in Fintech before & during 2008 Financial Crisis, & during Covid at State of Illinois, Financial Regulator of Crypto at Federal Government

Joanne Tan 1:56
Okay, noted. So, first part: NASA, fintech, government – your career journey. From your diverse and brilliant careers, what has been the enduring driving force behind so many accomplishments?

Tammy Roust 2:11
Thank you for saying so. For me, the driver has been curiosity, the need to feel challenged, and to know that I’m learning new things. I need to know how things work and why they are the way they are. Some of that is institutional history – things are the way they are for a reason, maintained by somebody who’s been in the institution for a while – some of it is really subject matter expertise, and some of it you can really get from studying the industry you’re in. I feel best at change-oriented organizations, where things are moving quickly and I can learn a lot from the experience as the changes come about. So when I was moving from Wilshire, which is a private sector financial services entity, to the Credit Agricole Amundi fund of hedge funds where I was a quant: I was working with long-short hedge funds at Wilshire, and I wanted to understand the rationale behind investing in a hedge fund, or in alternative investments in general, versus investing in the more traditionally available mutual funds. And that’s why I went into a hedge fund in 2007, which might not have been the right time to do it, but it was the time that I did it. Curiosity is also why I left NASA – believe it or not, not the rover Curiosity, but rather the curiosity as to whether the predictions I had made using the data available would hold when I had a lot more data to make those predictions with. That was the purpose behind moving from NASA to Wilshire, from public sector to private sector.

Joanne Tan 3:41
So you mean, at the time when you left NASA, there was not enough data at NASA to experiment with data-based predictions at scale. And you found that in fintech, the financial private sector, there was more data for you to play around with. Is that right?

Tammy Roust 4:04
Yes, that’s correct. When I was building models, there were really only 20 or so missions that had flown. And when I got into finance, there was a trade every femtosecond – even faster now these days, with quantitative trading, with high-frequency trading.

Joanne Tan 4:23
Okay, I want to go back a little bit to NASA, you know, because it’s a very glamorous name. Only a limited number of people can get their dream jobs there. Your first job in your 20s was as a NASA engineer, which actually started your later data, AI, and fintech journey for the next 25 years. What did you accomplish at NASA, and what did you do there that led to your advancement in this data-driven career?

Tammy Roust 4:59
When you start at NASA, you really have to hit the ground running. There’s an amazing amount of terminology that is very specific to the aerospace and defense industry, and you have to master it in a short time. And I needed to write code for modeling cost and risk too. I had to do all of those different things when I was starting. But once I’d done that, I realized that everything I was doing was based on about 25 or so missions over about 35 years. And that’s not a very rich data set to work from. There were a lot of idiosyncrasies in the data, where some missions were very different from missions we would do today. Compare that to finance, where you have a trade every femtosecond or more frequently, and those trades are very uniform. You can do a lot more quantitative analytics around them, and I could really examine whether my models would work. But to go back to NASA for a minute: I had written Monte Carlo simulations. So what’s a Monte Carlo simulation? I got asked this by a chairman once. In a Monte Carlo simulation, you assume that the variable in question follows some sort of a distribution, you generate a random number, and that random number lands you somewhere in that distribution. You take the value from that distribution, then you repeat this 50,000 or so times, and you look at the overall result. In this case, you would take all of those 50,000 simulated values, add them up, and take the average, and that would be the estimate for that variable for the mission. I had written code to do that for the International Space Station. I was also working on projects for flight proposals. One of those proposals, The Orbiting Carbon Observatory, won and became a flight project – and at its launch, it wound up at the bottom of the ocean near Antarctica.
But luckily, I had been working on project management for that, and so an exact copy of the mission was then approved. It finally launched in 2014, and it just finished its extended mission, which was taking observations of the atmosphere over North America.
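The Monte Carlo procedure Tammy describes – draw from an assumed distribution, repeat many times, average the results – can be sketched in a few lines. This is a minimal illustration, not NASA’s actual model; the lognormal cost distribution and its parameters are assumptions chosen purely for the example.

```python
import random

def monte_carlo_cost(n_trials=50_000, mu=4.0, sigma=0.3, seed=42):
    """Estimate an expected mission cost by sampling an assumed
    lognormal cost distribution n_trials times and averaging."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        # Each draw lands somewhere in the assumed distribution.
        total += rng.lognormvariate(mu, sigma)
    # The average over all simulated trials is the estimate.
    return total / n_trials

print(round(monte_carlo_cost(), 2))
```

With 50,000 trials, the average converges close to the distribution’s true mean; in a real cost model, each trial would combine draws for many uncertain variables rather than one.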

Joanne Tan 7:17
Observation of carbon… uh…

Tammy Roust 7:20
Correct – they were observations of carbon dioxide specifically. Both missions, OCO and OCO-2, were looking at carbon dioxide levels over forests. The North American continent has huge forests that they can look at, to figure out whether or not forests reduce carbon dioxide levels. That was the interest.

Building early AI model for data automation for hedge funds before & during 2008 Financial Crisis

Joanne Tan 7:40
Right. So then you left to join Wilshire, a fintech consulting company founded by another NASA engineer. Okay, so what did you do there?

Tammy Roust 7:56
So I left to join Wilshire, which was founded by Dennis Tito. Dennis Tito had worked at JPL once upon a time; I worked with people who had worked with Dennis Tito, but that’s not how I got the job – that’s another story. At Wilshire, I worked in the division that actually built their equity risk modeling software, which was used by companies like Fidelity, and State Street Global, and countless mutual funds. They were using it for risk modeling of the equity trades that they had. And I got access to tons of data, and the ability to break down model production into individual pieces and to automate those pieces, so that those companies – Fidelity, State Street, etc. – got an automated daily update of the risk model, as opposed to the manual monthly process that they’d had when I started there.

Joanne Tan 8:45
That sounds… yeah, that sounds like what AI is doing today: automating and streamlining for operational efficiency. But you were already hands-on back in 2007-2008, before and during the financial crisis. So what data automation fintech project did you design and develop that helped with decision making while you were in the hedge fund industry?

Tammy Roust 9:13
So I started at Wilshire in 2002, 2003, something like that. And by 2007 the work I had done on automation at Wilshire had caught the attention of a fund of hedge funds that was looking for a better way to measure the risk and return of their individual hedge fund investments. They had staff whose job it was to compile data on all of the hedge funds they were invested in, all month long, and import it into the model. And there were over 200 hedge funds that they were invested in. So it really did take those people all month to compile that information, and by the time the end of the month came around, that data was stale. It really wasn’t that useful, because it had been a measurement at the start of the month. So I started off automating all of these tasks – today we would call that data engineering, data integration, and data science work – by creating a structured data store to hold all of the information that was previously kept in an Excel file. And then I started connecting up all the disparate systems to the data store. So first you create the data store, which has all the daily data that you need. And then you take all the individual pieces and, one by one, you point them to the data store and ensure that the data flows into the data store to start with.

Joanne Tan 10:34
I see.

Tammy Roust 10:36
And then we were really cooking: we had Bloomberg hooked up to the data store, we pushed data from Bloomberg into it, and then we connected the models and automated the documentation. So we went from July 2007, when I started, where there was one person whose job was to just enter data all month long, to a task that took about two hours per month by January 2008. And that became really essential, because in March 2008, at the onset of the financial crisis, we were all on a company retreat in Marseille, France. We were all climbing down a mountain, and everybody’s BlackBerry started going off. It was unreal. They started frantically rappelling down the mountain to get to the base so they could check their BlackBerrys. It was crazy. We had discovered that Bear Stearns had gone down, and we needed to run the models. But our office was in Chicago, and we were in Marseille, France, and I’m like: “Don’t worry, because these models operate semi-autonomously.” So we were able to run them from Marseille, and we came back into the office. And unfortunately, we’d made it obvious when we were in Marseille that that’s what we were doing. So Paris realized: Oh, Chicago is automated! So every day, at the end of the Paris day, they would call us and dump the what-if scenarios on us: What if Greece goes down today? What if Greece defaults? What if Italy is kicked out of the EU? What investments can we get out of? How badly off are we? And how can we get to cash if our customers want out? And that was the situation for pretty much every day until early 2009. So the fact that I had automated all of that in 2007 and really early 2008 was really essential for the business to continue operating through the financial crisis in 2008 and on.
And then ultimately, the Credit Agricole fund of hedge funds, which was called Amundi at that point, decided to exit the US market in 2012, and that’s when I thought: I really need to have more controls over this market, because I had seen some of the worst behavior in the hedge fund market when I was in it.

After seeing the worst side of the unregulated financial market, Tammy Roust joined the governmental regulatory side, as CFTC’s Chief Data Officer

Joanne Tan 12:52
And you decided, uh, to go to the government side, the regulatory…

Tammy Roust 12:58
Correct. I joined the CFTC on the regulatory side, after the Dodd-Frank Act was passed.

Joanne Tan 13:04
I see. So now we are segueing into Part Two: Governmental regulations in fintech, crypto, and other areas. What experiences in your life made you realize the impact on ordinary people’s lives from unregulated financial markets, institutions, and instruments?

Tammy Roust 13:27
Well, I came back to the public sector in September 2011 or 2012 as somebody who had seen the financial crisis from the start, from the front lines. And what I saw from the hedge fund that I was at was a lack of transparency and liquidity. To the hedge funds we were invested in, I would say, “Hey, I have $5 million to invest, but I need to know what your exposure is to Greece, to Italy.” And they would tell me: “We’re not going to tell you.” And I would say, “Then I can’t invest $5 million in you,” and they would say, “That’s fine, we’re good with that.” And it was just surprising to me that investment committees were willing to take that for an answer. They would go ahead and invest the funds anyway. So when Madoff happened, I was not shocked. I was shocked that there weren’t more funds that, it turned out, had been doing nefarious activities. I saw a lot of non-transparent funds and a lot of illiquid funds, and I saw a lot of market turbulence resulting from that lack of liquidity and lack of transparency. Even the people who were experts in the market at that point were fearful, because they didn’t know what everybody was doing. When I was at Amundi, we had exposure to a lot of hedge funds that were closed to redemption – we couldn’t get our money out, and that was money for widows’ and orphans’ funds, and insurance funds, and pension funds. But at the same time, our investors were pulling their money from us, so we were stuck in a really bad space. No one wanted to lend money overnight, because they weren’t sure if the company they were lending to was going to go under overnight, and they weren’t sure whether they could get the funds back. Everybody was cutting credit lines, calling in their debt, and going to cash. Money funds broke the buck, which means their share price is supposed to hold at a minimum of $1, and it didn’t.
And the standard money market fund, which you were depending on to at least do better than your savings account, was now paying negative interest. It was doing worse than your savings account. That was my experience of the financial crisis, and that’s why I moved to the public sector regulatory side.

Joanne Tan 15:41
Which specific government agency did you move into?

Tammy Roust 15:45
I moved into the Commodity Futures Trading Commission, because the commodities markets are mostly out of Chicago, where I am located.

Joanne Tan 15:56
Right, okay. So that financial crisis and its after-effects lasted from March 2008 until, like, a few years afterwards. It was…

Tammy Roust 16:10
Correct. For those of us in the financial sector, we continued to experience structural unemployment well into 2012.

How did governmental regulation save Ledger X, the only solvent piece of the defunct FTX, Sam Bankman-Fried’s crypto scheme

Joanne Tan 16:20
Okay, all right. You described that there was only one piece of business of the defunct FTX, Sam Bankman-Fried’s crypto scheme, that remained solvent after FTX went bankrupt – your regulatory foresight kept it solvent. Tell us about it.

Tammy Roust 16:40
So the only solvent piece of FTX was the regulated piece, and that was Ledger X. Ledger X is a clearing house for digital currency derivatives. What’s a clearing house? Say you and I do a trade. I have exposure to you because I did the trade with you – I have what’s called counterparty risk. You have exposure to me – that’s also counterparty risk. Either one of us can default. Or, we can exchange that risk for risk to a clearing house, which has great credit. It’s kind of like this: say you’re loaning your friend 50 bucks, and you’re taking the risk that your friend is going to pay you back. Well, what if, instead of loaning your friend 50 bucks, you could loan JP Morgan Chase 50 bucks? You would much rather loan JP Morgan Chase 50 bucks, because you’ll have very clear terms, and JP Morgan Chase has a solid credit rating. So that’s what a clearing house does.

Joanne Tan 17:38
Oh.

Tammy Roust 17:38
They have the solid credit rating, and you switch it out: instead of your friend, who owes you 50 bucks that you’re going to have to ask for back, it’s JP Morgan Chase that owes you. And your friend does the same thing.

Joanne Tan 17:50
I see

Tammy Roust 17:51
Now he owes 50 bucks to JP Morgan Chase instead of you.
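The swap of bilateral exposures for exposures to the clearing house that Tammy describes is known as novation, and it can be illustrated with a tiny sketch. The party names and the dictionary representation here are purely hypothetical, not any exchange’s actual data model.

```python
def novate(trades, ccp="ClearingHouse"):
    """Replace each bilateral obligation (debtor owes creditor) with
    two obligations facing the central counterparty (CCP): the CCP
    owes the creditor, and the debtor owes the CCP."""
    novated = {}
    for (debtor, creditor), amount in trades.items():
        # The creditor is now paid by the CCP, not the original debtor...
        novated[(ccp, creditor)] = novated.get((ccp, creditor), 0) + amount
        # ...and the debtor now owes the CCP instead.
        novated[(debtor, ccp)] = novated.get((debtor, ccp), 0) + amount
    return novated

# Your friend owes you 50 bucks directly; after novation,
# both of you face the clearing house instead of each other.
print(novate({("friend", "you"): 50}))
```

Because every participant faces the CCP, the CCP can net and collateralize all positions centrally, which is the efficiency Tammy points to below.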

Joanne Tan 17:54
I see, so that protects the transactional parties from defaults.

Tammy Roust 18:00
Correct.

Joanne Tan 18:01
And that requires a segregation of customer funds from the operational funds of FTX.

Tammy Roust 18:07
That is correct. FTX bought Ledger X in 2021 and then lobbied Congress for digital currency derivatives to be allowed to be traded directly by individuals, without any of the usual intermediaries that provide liquidity and risk management functions. That was a very near miss, because there were a lot of folks who thought strongly that people should have better access to cryptocurrency, and they thought this was the way to go about it. At the CFTC I wasn’t the only one working on the Ledger X application. All of us who did examinations of Ledger X get credit for this one: for ensuring that Ledger X’s customer funds were properly segregated, so they couldn’t be used by FTX as part of settling some of its trades, and that the technology controls were in place to keep those funds secure. That was our job in reviewing Ledger X’s application. Ledger X was sold for $50 million, and in my experience, that’s a strong example of why we have segregation of customer funds from company funds, and why we have regulation at all.

Joanne Tan 19:14
Okay, so that’s why it’s the only solvent part – thanks to the regulation that was implemented.

Tammy Roust 19:23
Correct.

The future of crypto or digital assets like Bitcoin and blockchain: Transparency and liquidity from stable coins backed by US T Bills

Joanne Tan 19:24
Okay, so crypto or digital assets like Bitcoin, and blockchain, what are your opinions about where they’re heading?

Tammy Roust 19:34
Crypto assets are actually headed for immense future growth and more stability as we migrate into an environment with regulated stable coins. A regulated stable coin represents an advantage over a bank account in that it offers better accessibility and inclusion for the unbanked population, and it allows for greater velocity of money. And if the stable part of the stable coin uses US currency as the backing, as a lot of stable coins do – they use US Treasury bills or bonds or notes – then it strengthens the use of the US dollar as the world’s reserve currency. So it results in a strong dollar. In addition, distributed ledger technologies allow for near-simultaneous trade and settlement. I send a communication to the blockchain, and it’s publicly viewable – the distributed ledger is publicly viewable to everybody. Settlement occurs at nearly the same time, as it goes through the consensus mechanism across all of the different members of the distributed ledger. And it happens regardless of where I am. I use my phone to do it; I don’t need anything else. I don’t have to have a brick-and-mortar location. I don’t even have to have a physical location. That’s an incredible advantage in an increasingly global world, where I don’t want to be worrying about international fund transfers, and I don’t know that I’ll have anything except my phone with me.

Joanne Tan 21:02
Yeah, so you mentioned earlier that from the 2008 financial crisis, you realized the financial market lacked transparency and liquidity. Do you think this helps, obviously, with transparency, and with liquidity too?

Tammy Roust 21:22
It does, because US T bills are extremely liquid. Stable coin adoption in particular results in additional transparency and liquidity because US T bills are widely available and traded worldwide.

Crypto can strengthen US Dollar as the world’s reserve currency, if the US federal government develops a regulatory regime for stable coins

Joanne Tan 21:35
Yes, I would like some more light shed on how crypto is going to strengthen the US dollar as the world’s reserve currency. The Trump administration seemed to be on the path of legitimizing crypto. But that’s not unique to the US; other countries…

Tammy Roust 21:55
No.

Joanne Tan 21:56
Yes, other countries are using crypto as well.

Tammy Roust 22:00
Other countries are using crypto, but we’re the ones who are going to develop a regulatory regime where we’ll have a regulated stable coin. We currently have stable coins already in the United States, and we have investor protections for people who are currently in stable coins. The other advantage here is that the US currency is more stable than other alternatives. And the purpose of a stable coin is for it to be stable. You don’t want your cryptocurrency to be wildly oscillating in value from day to day when it’s the thing that you’re effectively using as your checking account. You want to know that your cryptocurrency of choice is going to be roughly the same value from day to day.

Joanne Tan 22:43
Another question is…

Tammy Roust 22:44
Otherwise you can be in an investment contest, and that’s a different animal.

Chief Data Officer (CDO) at state, federal governments, and for a fintech startup

Joanne Tan 22:49
Ah! Okay, you have been the chief data officer, CDO, for state and federal governments and at a Fintech startup. What are the common challenges shared by governmental and private sectors? What are the issues unique to each of them?

Tammy Roust 23:05
It probably won’t surprise you that a federal agency, a state government, and a fintech startup actually share many common features. The first one is that they have shallow pockets. None of them has big money. Most of the time, the budget I’ve had, whether at the state, federal, or private level, is much smaller than the one required to do all the work that needs to get done. Every time, I’ve worked cross-functionally with different departments to figure out which use cases are most valuable to the organization, which are going to give us the highest return on investment, and which will reduce our risk the most. The difference between the public and the private sector is that with public sector organizations, you can’t really use financial incentives as a motivator. You need to figure out what is truly motivating for your staff, because I can’t pay them more – I’m not allowed to. I am often very limited as to how bonuses are paid out, and there might also be collective bargaining agreements that restrict what I can do in terms of pay and benefits. Some people are really motivated by being true believers in the mission of the organization. Some like to work with cool technology. Some really are helpers, and they want to give back to the community. With private sector organizations, issues of budget remain, but I’m much more free in terms of pay, benefits, bonuses, etc. People might also be aligned to the mission of the organization or to working with cool tech, but I can’t depend on them being motivated by a need to help the public when we’re in the private sector.

Using AI Data during the Covid19 pandemic for the State of Illinois

Joanne Tan 24:50
Right. During COVID, as the Chief Data Officer for the State of Illinois, what were the challenges? And your solutions using data and AI?

Tammy Roust 25:05
During COVID, the challenges at the State of Illinois were around siloed data, legacy technology, ensuring privacy, and keeping all the stakeholders apprised of our progress. For example, after the experience Italy had in having to ration care for COVID patients, we were very concerned about making sure hospital beds were available for COVID patients in Illinois, and that we could get testing out there. But the hospital reporting systems ran on a 1980s-era system, and we really had to do business process re-engineering and automation so that all the data flowed from the hospitals – which were supplying data on how many beds they had and how many were occupied by COVID patients, or ICU, or whatnot – to us, through this 1980s-era engine, and then trickled downstream to an operational dashboard that updated frequently, so that the Governor and other decision makers could actually look at these dashboards and make decisions about what was going on. Once we did the re-engineering and automation, we used AI to identify likely errors early in the process, so that we didn’t have the public health folks chasing down fat finger issues. We quickly learned that some hospitals were capable of expanding their beds by 25% – the limiting factor for those hospitals was nurses on hand to treat the patients, not actual bed capacity. Other hospitals didn’t have that capacity, they didn’t have that room, and so anything exceeding a 25% expansion was likely a fat finger error.
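The kind of validation rule Tammy describes – flag reported bed counts that exceed a plausible expansion over the previous report – can be sketched as follows. This is a minimal illustration under stated assumptions; the hospital names, data shapes, and fixed 25% threshold are hypothetical, not the State of Illinois’ actual system.

```python
def flag_bed_reports(prev_beds, reported_beds, max_growth=0.25):
    """Flag a hospital's reported bed count as a likely data-entry
    ('fat finger') error if it exceeds an assumed plausible
    expansion (default 25%) over the previous report."""
    flags = {}
    for hospital, beds in reported_beds.items():
        baseline = prev_beds.get(hospital)
        if baseline and beds > baseline * (1 + max_growth):
            # Queue for review instead of an automatic phone call.
            flags[hospital] = (baseline, beds)
    return flags

# Hospital A grew within the 25% allowance; Hospital B more than doubled.
print(flag_bed_reports({"Hospital A": 100, "Hospital B": 80},
                       {"Hospital A": 120, "Hospital B": 200}))
```

In practice such a rule would be one of several checks feeding a review queue, so public health staff call only the hospitals whose numbers genuinely look wrong.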

Joanne Tan 26:47
You mentioned using AI to catch human errors in data input. Can humans catch AI errors?

Tammy Roust 26:56
Yes, but it would occur much more slowly. In this case, the hospitals were supplying that data at 11:59pm every night. It was one of a dozen things that a midnight charge nurse had to do, during an era when staffing was light and they had a lot of things to be concerned about and very sick patients. We didn’t want the public health nurses to come in six hours later and call all of these hospitals about things that looked like obvious errors, when we could correct those things and flag only the ones that were likely to be actual errors. So for the hospital that could expand 25%: we don’t need to call them unless we don’t have any other calls to make. There were cases where the public health nurses were making literally dozens of calls all day long to hospitals trying to validate their bed numbers. And we wanted to release our public health nurses from performing that function. They needed to be working on other things, because we had a lot to do.

Joanne Tan 27:59
Yeah, well, you played a pivotal role with AI and data. So, you had almost 13 years of experience at the Commodity Futures Trading Commission, CFTC, in different capacities. What were your major accomplishments, how did they lead to your current work in AI, and how will these experiences help with your future goals?

Tammy’s major accomplishments in her 13-year stint at CFTC (Commodity Futures Trading Commission): safeguarded crypto trading through an intermediary Clearing House; rewrote cyber security law, used AI to tackle financial frauds

Tammy Roust 28:27
So some of my major accomplishments were to review the technology for the application of Ledger X, as I discussed earlier. The way that clearing works is: if I buy bitcoin and sell a contract to you, then you have risk exposure to me, and if I default on that contract, you’ll be out of luck if you don’t have a clearing house as the intermediary. With the clearing house, we don’t have exposure to each other. It doesn’t matter whether I sold a contract to you or you sold a contract to me; it matters only that I have exposure to the clearing house, and the clearing house has our collateral on hand to claim if we default. Because of that, it’s very important that the clearing house has strong financial risk management and technology controls. We are exposed to the clearing house, and the clearing house is exposed to us, but I’m not exposed to you. So the clearing house needs to have those controls, and it’s much more efficient for a clearing house to know all the people trading with it than it is for you and me to know each other and do counterparty risk management on all of the people we trade with. It was my job to review the technology controls, and the risk management controls too, and make sure they were sufficient. I reviewed them, I required some revisions to the Ledger X application, and Ledger X was approved. Ledger X was later acquired by FTX, which wasn’t regulated and had numerous control deficiencies, including financial controls, as we later found out. And then Ledger X was the only non-bankrupt piece that FTX had, and it was sold by the FTX bankruptcy trustee.

Another major thing that I got to do at the CFTC was to rewrite regulations. And that’s as exciting as it sounds – I like to say I went from one kid to three kids in the amount of time it took me to rewrite ONE regulation. But I did it. I was writing regulations to make it clearer what cybersecurity controls had to be performed at regulated entities, and that’s how I got closer to data, because I was looking at vulnerability testing at these places. A vulnerability is effectively a reported flaw in a piece of code that can result in a hacker getting into your environment. And so vulnerability testing tests for known pieces of code that are known to have security flaws, throughout your environment. And a lot of the regulated entities I saw were not doing this on a regular basis. They weren’t doing regular patch management; they weren’t doing regular penetration testing. I had some entities that weren’t doing it at all, and some entities that were doing it at a pretty good frequency, and I needed them all to at least adhere to a minimum industry standard. And that’s why I wrote the regulations that I did.

The other thing I learned from writing those regulations was that it was no longer possible for a human to track those vulnerabilities manually, because they were growing at such a quick pace that it was necessary to look at big data ways of ingesting all of the information and prioritizing it in terms of the environment that the regulated entity had. That work got me more interested in the data side of the risk management function, and I wound up doing volunteer projects to enhance my knowledge of data science and artificial intelligence, and that’s what took me to the State, and eventually back to the CFTC as their Chief Data Officer.

And then when I came back to the CFTC, I took what I had learned about AI at the state level and applied it at the CFTC to start building out their AI projects, which included natural language processing and ingestion of large numbers of documents, as the CFTC needs to do for enforcement cases. And I learned that there was a huge problem with financial fraud. A lot of fraudsters in the payments market exist; there’s a trillion-dollar market for financial fraud because of the difficulty in assigning payment to the correct recipient. We can use AI to help us ensure that the funds for payment only go to trusted identities who have validated their identity with a trusted provider. We see this these days: whenever we send a Zelle or a Venmo payment, they tell you to validate: Are you really sure that you want to do that? You know you can’t actually get this money back. I feel like every bank is telling you that. And I look at that and I go, you realize this is me, right? This is me sending money to myself. It’s me. You know me. You know I send money to myself every month. Why do you do this? Because they need a third-party, trusted identity provider. That’s why.

Part 2: AI, data, the future economy

What’s next for AI data: dynamic system, process re-engineering approach

Joanne Tan 33:30
Okay, now we’re coming to the part on AI, data, the future economy. Some experts and authorities believe that AI is being hyped up too much, while others believe that AI is revolutionizing all aspects of life in every industry. What do you think?

Tammy Roust 33:56
I actually don’t think AI is being hyped too much. I think it’s transformational. When electricity was first available, a lot of the functions that we currently do today had to change, because at first people were just doing the same thing with electricity: they just slapped electricity onto a completely manual process. You can’t do that. To get the real productivity improvements, you have to change the process, and it’s the same thing for AI. You can’t just slap AI on an existing process and expect huge improvements forever. You’re going to have to change how you do the process in order to really start getting the gains from AI. I was at a talk where a manufacturing firm had something like 89 different languages spoken on the floor among its workers. They used AI to translate their benefits documentation into the relevant languages, because there’s no way they were going to be able to have 89 different translators, or even one really articulate translator, speak to all of these different elements of their HR documentation when it comes to signing up for health benefits and all of the things that are really required of you as an employer. But a process re-engineering approach would use AI in a more dynamic manner: instead of converting these documents into a language, they would use it to help employees navigate through the different options that are available to them, in their native language. That’s the difference between first-generation AI taking a flat file and converting it, and later efforts, where you’re using a dynamic system instead of a static system.

Joanne Tan 35:37
So the dynamic system is, in other words, the process re-engineering approach you mentioned?

Tammy Roust 35:44
That is correct. A dynamic system, in this case, would be a process re-engineering approach. Instead of the process being that you give the employees documentation and they give you back forms, you create a login system for the generative AI prompt and a workflow, and as the employee starts asking questions, you start filling out those forms for them and process them electronically, directly to your back-end benefits system. It takes your employees much less time, as opposed to: great, here’s my document in my native language, and now I need to go fill out the form in English, which I’m not really very conversant in.

Worsened by AI, we’re losing our capacity for human connection

Joanne Tan 36:36
Okay, so what are the biggest threats, even existential threats to humanity and our planet from AI?

Tammy Roust 36:47
The biggest threat that I’m seeing is actually not directly attributable to AI, but it will be worsened by AI, and that’s that we’re losing our capacity for human connection. It’s so easy to get what we want done using existing systems that when we’re faced with real humans, we have a tendency to get frustrated, because we expect it to be faster. We forget that this is a real person with real feelings, and we don’t know how to cope with a person who has real feelings, as opposed to a system that responds generically every time we interact.

Joanne Tan 37:27
So, in other words, our social skills, our people skills, our soft skills, our communication skills, how to relate human to human, will atrophy.

Tammy Roust 37:39
That’s correct.

Operational efficiency by AI, Agentic AI, Gen AI, no code NLP – where is AI now, and where is AI heading

Joanne Tan 37:40
Given the hundreds of billions of dollars already invested, and to be invested, in Gen AI without seeing much profitability or ROI yet, is AI hitting diminishing returns? For AI, the low-hanging fruit in operational efficiency is easier and has already been cashed in. What’s next?

Tammy Roust 38:06
So the really costly investment is for large language models, but AI is more than generative AI. Generative AI can be really useful for the next stage of development, which is calling other no-code functions, like natural language processing. When I was doing my dissertation, the researchers Brynjolfsson and Hitt had their papers on the IT Productivity Paradox, and they found that investments in IT were not necessarily positive in return on investment. Now, 30 years later, I don’t think any of us would disagree about the productivity improvements from using technology. And I don’t think we’ve even begun to hit the operational efficiencies with AI. I think we’re still stuck on privacy, security and reliability, and the way through that is to limit the use of less reliable AI functions and move towards more reliable AI functions: using things like agentic AI to call functions that have already been well defined, but using the generative AI piece of it to produce a no-code way of doing so.

Joanne Tan 39:10
So it sounds like agentic AI is still quite a way out. We’re not…

Tammy Roust 39:15
It’s not. I’ve actually seen it in place: you can dynamically create a Lambda function within Amazon Bedrock, and it will call that function for you using your generative AI. Amazon Bedrock allows you to use large language models.

Joanne Tan 39:34
So it’s all over the place. On the one hand, we’re still stuck on privacy, security and reliability,

Tammy Roust 39:40
That is still true, yes.

Joanne Tan 39:42
And just the operational efficiency part, where we haven’t really… we’re nowhere near the end of it. So it’s not really sequential. Gen AI is springing up while the operational efficiency part is being propagated, and at the same time, some other sectors are stuck on privacy, security and reliability. So depending on what industry, what specific sectors of the economy, they’re all over the place right now. Is that right?

Tammy Roust 39:42
Yes, there is a very different capability maturity in terms of AI from industry to industry. In particular, the legal and education industries are much more in the initial stages of AI development, and yet those are the ones that have the most textual content.

Will there be a shortage of high quality data for AI?

Joanne Tan 40:40
Okay. There is also a shortage of high quality data, and that leads to at least two issues. One is the copyright of high quality data created by humans. Assuming that the courts will uphold copyright protection of data against unauthorized use by OpenAI, will that be the ultimate limitation to AI’s power?

Tammy Roust 41:07
While copyright protection of data could pose limitations for AI development in some cases, it’s unlikely to be the ultimate limitation to AI’s power. There’s always more content being produced, and in some cases, some entities will see it as a marketing advantage to supply their content to large language models, as long as they get citations. There’s always more content available for training, even under copyright. The more likely limitation is the speed at which we can develop assurance around AI functions so that we can trust the outputs. The governance functions will necessarily operate at a much slower pace than the AI development. That’s a good thing. We will need guardrails and governance, and there will be functions that just don’t make the cut once we get into a governance context.

AI regulations should be in place AFTER AI development

Joanne Tan 42:00
So you think that AI regulations should not strangle AI development, but should be in place AFTER AI development.

Tammy Roust 42:09
In every case I’ve ever seen, regulations follow development. You can’t put regulations in place before you have development, or you will not have regulations that actually accomplish what you’re trying to do, which is to limit the risk. You need to see what the product is and how it behaves first, before you are able to say, oh, okay, this is how we build sensible regulation around this entity.

Companies need AI governance and AI guardrails during AI life cycle and AI retirement, to avoid harm to humanity by AI

Joanne Tan 42:36
Well, but the godfather of AI, Geoffrey Hinton, who won the Nobel Prize in Physics in 2024, is a pioneer in artificial intelligence. He has expressed concerns about the potential dangers of AI, including its potential to surpass human intelligence and pose an existential threat to our human survival. That led him to leave Google and advocate for increased regulation, and he was the one who talked about having a mechanism for large language models to reset, delete, or otherwise intervene when AI is out of control and threatening humanity. So what do you think?

Tammy Roust 43:24
I think that is part of the governance and guardrails activities: you have an AI life cycle process, by which you have development of AI, then you have governance activities, and then you have regulatory activities for something that is more ecosystem-wide. And when the model is no longer performing as expected or fit for purpose, then you retire the model. That’s part of governance activities and life cycle management.

Joanne Tan 44:03
Okay, is there any mechanism… you mentioned retiring AI when it stops doing this and when it stops doing that, remember? Can you talk about that? Is this a good place…

Tammy Roust 44:14
Sure. The way you retire AI, when it stops being fit for purpose or providing responses within bounds, is that you need to pull it out of production, which means you need to understand what the limitations of your AI are, and what it’s supposed to do. You need to have something to replace it if you’re going to be pulling it out of production; otherwise, people will subvert your retirement of the model, and they will find a way to do it. So retiring a model means pulling it out of production: you will no longer have access to it. And in order for that to happen, you have to always have a continuous development process by which you have a new AI project to replace the functionality of that one. You should always expect that there will be a life cycle where something stops functioning as expected.

Joanne Tan 45:08
So humans must be in the loop and in command and monitoring the AI performances. But …

Tammy Roust 45:15
That’s correct. It’s part of AI governance processes to review the model outputs and to ensure the model is fit for purpose on a regular schedule. The Fed’s SR 11-7 model governance activities around traditional financial risk management are what I’m seeing a lot of the private sector adopting for this. It’s typically a monthly model risk management committee meeting where you review those outputs and decide when it’s time to retire the model.
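The monthly review she describes, checking model outputs against bounds and flagging the model for retirement when it drifts, can be sketched roughly as follows. This is a simplified illustration; the metric (a simple error rate) and the threshold are illustrative assumptions, not anything prescribed by SR 11-7.

```python
# Sketch of a periodic model-performance check of the kind a model
# risk management committee might review monthly.
# The error-rate metric and the 10% threshold are illustrative.

def error_rate(predictions, actuals):
    misses = sum(1 for p, a in zip(predictions, actuals) if p != a)
    return misses / len(actuals)

def review_model(predictions, actuals, max_error=0.10):
    """Flag the model for retirement review if it drifts out of bounds."""
    rate = error_rate(predictions, actuals)
    status = "fit for purpose" if rate <= max_error else "flag for retirement review"
    return rate, status

# At launch the model performed within bounds...
rate, status = review_model([1, 0, 1, 1], [1, 0, 1, 1])
print(status)  # fit for purpose
# ...but this month's outputs have drifted.
rate, status = review_model([1, 0, 1, 1], [0, 1, 1, 0])
print(status)
```

The key design point is that the check runs on a schedule and its outcome feeds a human committee decision; the code only surfaces the flag, it does not retire anything by itself.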

Joanne Tan 45:43
Yeah, but in reality, do all the companies do that? Do all the government agencies do that? That is a preventive, proactive mindset. Just because it should be done doesn’t mean that people do it, until the problem compounds into much larger proportions, unnecessarily.

Tammy Roust 46:08
Certainly. We have seen real life examples where models have stopped performing as expected, both financial risk management models as well as technology models, and people have continued using them to their peril.

Untagged synthetic data poses systemic risks; model collapse; the real dangers from AI hallucination

Joanne Tan 46:21
Yes, that’s what I’m worried about. Okay, there are two issues I’m concerned about, and besides the data shortage, the other one is synthetic data: AI creates new usages by recycling the data it has already used. What harm can synthetic data do? That reminds me of Norman Rockwell’s painting about gossip or rumor: after a rumor has circulated through 15 people, what comes out at the end is completely different from what started. Now, what’s your observation, and concern, and opinion about synthetic data?

Tammy Roust 47:11
Synthetic data can be harmful because it can lead to systemic risk, particularly if you don’t have the data properly tagged as synthetic and it’s all based on the same root data. The data is homogenized; everybody is really looking at the same thing. So you’re trying to compare two things that actually came from the same root source, and thinking, oh good, the second thing is just like the first, when they both came from the same source. You think your model is doing great, because you’re comparing something to itself. That’s the real danger here, the systemic risk, because they’re both based on the same root data, and that can lead to model collapse. I remember prior to the financial crisis, when everybody said, oh, we don’t have to worry about the real estate market, because there’s never been a nationwide collapse in real estate prices since the Great Depression. And I was like, well, then you’re excluding the Great Depression from this, aren’t you? And they were! They were excluding the Great Depression from that. And there was also a lack of generalized data prior to the 1930s anyway, so it’s not a huge amount of history to be looking at to make these dramatic conclusions. A lot of these LLMs in particular are based on the same data. So this is model collapse, where everybody’s model is based on the same data, and the model collapses because it encounters something it’s never seen before, a nationwide drop in real estate prices, and boom: I have no idea how to risk manage this.

Joanne Tan 48:53
Model collapse, as well as hallucination. AI, with all its seeming authority, is opining with certainty that is not based on a sound database. And when it hallucinates, and people have developed dependency and trust on AI, they listen to AI to their own demise. So that is another danger with AI overall. I wrote a blog two months ago, from a very amateur, common sense point of view, because I am not an expert in AI, about the need for labeling synthetic data so it doesn’t get integrated into the non-synthetic data. But the problem is, humans do make labeling mistakes. Do you think AI can be leveraged to check and balance AI AND humans?

Need for auto tagging of synthetic data when it is being generated and used; the danger of group think

Tammy Roust 49:56
So first of all, I want to point out that when synthetic data is used, it is being generated, and so it should be auto-tagged as synthetic at the moment of generation.

Joanne Tan 50:07
Yes,

Tammy Roust 50:07
That reduces the likelihood that humans will mis-tag the data to begin with. But also, I think we have to be worried about group think, in terms of the models all being based on the same data: group think, from an AI perspective, where AI keeps producing the same data because it’s all based on the same root data.
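The auto-tagging idea can be made concrete with a small sketch: the generator stamps each record with a synthetic flag and its root lineage at creation time, so no human has to label it afterwards. The field names and values here are illustrative assumptions.

```python
# Sketch of auto-tagging synthetic records at generation time so that
# provenance travels with the data. Field names are illustrative.
import random

def generate_synthetic_record(template, seed_source):
    """Create a fake record; the tag is applied by the generator itself."""
    return {
        "amount": round(random.uniform(10, 1000), 2),
        "zip_code": template["zip_code"],
        "synthetic": True,           # applied automatically, not by a human
        "root_source": seed_source,  # lets us detect shared lineage later
    }

real = {"amount": 250.00, "zip_code": "60601", "synthetic": False}
fake = generate_synthetic_record({"zip_code": "60601"}, seed_source="ds-2023-q4")

# Downstream, a training pipeline can exclude or down-weight synthetic rows,
# and check whether two datasets share the same root before comparing them.
training_set = [r for r in [real, fake] if not r["synthetic"]]
print(len(training_set))  # 1 -- only the real record survives the filter
```

Carrying the `root_source` field alongside the flag is what lets a pipeline notice that two datasets descend from the same root, which is the homogenization risk discussed above.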

We need to have a human consensus mechanism & AI governance committee to correct AI’s mistakes

Also, we need to develop a consensus mechanism where we have multiple humans in the loop, because if one human reviews the results of the AI and says, “Oh, this doesn’t make any sense,” it’s very tempting to believe the AI. We all know that this is an inherent bias we have, to believe the technology rather than the human. It’s necessary to have multiple humans in the loop and to develop a consensus mechanism for these humans to all agree that the AI is wrong. That’s why you have a governance committee. And finally, you use the wisdom of crowds: if that consensus group agrees on what’s actually going on and can explain it in an identifiable way, then that’s how we keep from this danger of being unable to check AI against human input.
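The consensus mechanism described above can be sketched as a simple majority vote over human reviewers: the AI output is overridden only when most reviewers agree it is wrong, so no single dissenter (or single deferential reviewer) decides. This is a minimal illustration of the idea, not a description of any particular committee’s process.

```python
# Sketch of a human-consensus check on an AI output: override the AI
# only when a majority of reviewers agree it is wrong. Illustrative only.
from collections import Counter

def consensus(reviews):
    """reviews: list of 'agree'/'disagree' verdicts on an AI output."""
    tally = Counter(reviews)
    if tally["disagree"] > len(reviews) / 2:
        return "override AI"
    return "accept AI"

print(consensus(["disagree", "disagree", "agree"]))  # override AI
print(consensus(["agree", "disagree", "agree"]))     # accept AI
```

A real governance committee would also require the dissenting majority to explain its reasoning, which is the “identifiable way” condition in the passage above.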

Energy supply shortage for AI data centers; making and using efficient small language models for specific uses

Joanne Tan 51:26
Okay. Electricity supply for data centers is another obstacle to AI development. Will nuclear power be revived in time to power the data centers, and will nuclear power be enough?

Tammy Roust 51:44
You know, we’ve had an energy crisis since I was born, and we always seem to have one continuously. There’s no question that LLMs consume a lot of power, and so do humans. To put numbers to this, and I think a lot of people have heard this: MIT published a paper saying that a single transformer LLM can emit as much carbon as somebody taking 300 round trips between San Francisco and New York.

Joanne Tan 52:19
Wow,

Tammy Roust 52:19
In a study that compared energy consumption and carbon emissions for LLM inference, generative tasks like text summarization were found to be more energy- and carbon-intensive than discriminative or non-generative tasks like classifying text (“this is this thing”). Methods for creating smaller versions of trained models, like model distillation (taking an eight-factor model and making it a two-factor model) or compression, can reduce environmental impacts and inference time, but training those types of models can still contribute to environmental impacts. And right now, there’s no agreed-upon way to estimate environmental impacts from generative AI.

I’m reminded of the earliest implementation of machine learning, which was about text recognition for converting unstructured text to structured text. The original use case seems to have been that the Postal Service wanted to convert handwritten zip codes into the little bar code, so they needed to be able to convert handwriting into actual numbers, to recognize that this handwritten O is a zero.

So the interesting thing is that these big LLMs are taking a lot of compute, and then we’re training them for these specific use cases, instead of starting at the specific use case level, using a small language model, and developing AI specific to it. It’s like buying an SUV and then cutting it down to fit the number of passengers, instead of buying a smaller car to begin with, and then being angry about the fact that the SUV doesn’t get great gas mileage. If we used a compact car to start with, for the basic “get me to the grocery store” use case, and we just left the LLM for the heavy lifting, like “ingest these 5,000 pages about European economic history and give me a five-page summary of what the Smoot-Hawley Tariff Act means,” then we would reduce the amount of energy required for the task.

Or we could use a small language model that’s specifically trained only on European economic history to start with. It requires more development on our side to do the small language model, though, and AI talent is scarce.
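The “compact car vs. SUV” idea amounts to routing each task to the smallest model capable of handling it, falling back to the large model only for heavy lifting. A minimal sketch of such a router follows; the model names, capability sets, and relative costs are made-up illustrations, not real services.

```python
# Sketch of model routing: send each task to the cheapest model whose
# capabilities cover it. Model names, capabilities, and costs are
# hypothetical placeholders.

MODELS = {
    "slm-domain":  {"cost_per_call": 1,
                    "handles": {"classify", "extract"}},
    "llm-general": {"cost_per_call": 50,
                    "handles": {"classify", "extract", "summarize_long"}},
}

def route(task):
    """Pick the cheapest model whose capability set covers the task."""
    capable = [(name, spec["cost_per_call"])
               for name, spec in MODELS.items()
               if task in spec["handles"]]
    return min(capable, key=lambda pair: pair[1])[0]

print(route("classify"))        # slm-domain
print(route("summarize_long"))  # llm-general
```

Under these assumed costs, routine classification never touches the large model, which is where the energy savings she describes would come from.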

I don’t think we’ll go back to the nuclear power model, because, to be honest, we’ve been talking about that since the 1990s, and there is not a breeder reactor in sight over here; we retired them here in Illinois. We have a lot of options with respect to local energy storage and increasing energy efficiency, and the re-investment of technology providers in nuclear seems to be driven by the desire to spin up data centers quickly. That seems to be the real motivation behind it.

Joanne Tan 55:08
Yeah, well, everybody wants to be the first one. Nobody wants to be left behind. This FOMO leads to overspending and over-complication.

And that leads to… so could you share your insight and foresight about the knowledge economy? How is it related to the AI revolution?

AI future and the knowledge economy

Tammy Roust 55:28
Believe it or not, the AI revolution will actually result in more shifting to the knowledge economy, as it becomes easier to modify structured tasks to be performed by AI. Tasks like filling out the form for your benefits can be performed better by AI, and we don’t like to do those. We can be subject matter experts in our domain more if we’re working on that than if we’re working on traditional office productivity functions like filling out forms, or maintaining calendars, or sending emails, or things that aren’t as cognitively taxing or requiring of our subject matter expertise. It puts more emphasis on the knowledge economy, not less.

Joanne Tan 56:12
That’s an optimistic view, that AI will motivate people to go deeper into their subject matters and learn better. But the argument on the other side, from a pessimist’s point of view, is that people will rely on AI to learn instead of learning themselves.

Part 3: Tammy Roust’s personal stories of juggling motherhood to 3 young children while advancing careers

So personal stories, that’s Part Three: your personal stories and goals. Here are some personal questions; I’m sure your answers will inspire many people. What are the biggest challenges of juggling motherhood while advancing your careers?

Tammy Roust 56:48
Sheer cognitive load. Keeping myself up to date with the many hats I wear at work while parenting, that’s the biggest challenge. I go for a run just to give my brain a break from working on all of the different things. There’s a book, “I Don’t Know How She Does It”, where there’s a message storm going on in the heroine’s head all the time, and that’s what my brain feels like. It’s like, oh hey, we need to register somebody for something, and then somebody needs lunch, and then, oh, I need to write a presentation for the AI Governance Committee, and it’s all at the same time.

Joanne Tan 57:31
Yes, brain overload. How do you manage to get a PhD and three master’s degrees while advancing your careers and being a mother to three young children? Do you sleep at all? Do you regard yourself as a genius or a superwoman?

Tammy Roust 57:51
Neither. And no, I don’t sleep enough, that’s for sure. I’m definitely not a genius, and I have been in the presence of actual geniuses, so I can say that for sure, that I’m not a genius. All my graduate coursework was done before I had kids. If I had to do that now, I could not handle the cognitive load of working and having kids and then doing graduate coursework, that’s for sure. I don’t know how people in a PhD program do it with children, because for a PhD, you really have to go deep on a very particular topic, and it takes hours to get your brain into that mode, and kids require attention at fairly frequent intervals, if not constantly. For my master’s degrees and my PhD, it really helped that I was at a university that was very supportive of working students, and one that allowed me to define my research domain to be interdisciplinary. It was touching IT and economics and math and marketing, and they were perfectly fine with that, and that was great. They supported me to achieve my goals, and I highly recommend Claremont Graduate University for that reason, for working students. My dissertation had all of those different things in it.

Most research universities would really have wanted me to focus on one thing that was more particular to the research that they were doing at the time. I have known people at much more prestigious universities who never finished because they didn’t get the support of their committee or advisor. I have known people who had to fly to a different city to get their advisor to sign off on their dissertation proposal; even after months of sending it and sending it and sending it, they had to fly somewhere and sit outside somebody’s office to wait for them to sign off on it. And even with that support, my graduate work took 10 years, and I created exit options for myself. That’s why I have three master’s degrees: every two years I would collect a master’s degree and decide, do I really want to keep doing this? This seems kind of crazy, but why not? And at a certain point, it felt like there was still more I wanted to accomplish, so I kept pushing on.

In my case, I was able to find other people in my research domain who could help me with getting data, which is essential for a dissertation, and to help me with writing research papers, so I could work towards my doctorate while working for pay, and then working for free. I got to go to conferences and do industry research, and it was amazing. I got to do consulting all on my dissertation topic. And once I finished my dissertation, I got royalties, basically $25, enough to buy a pizza. But now that I have kids, I’ll just stick to certificates and things that take less than 10 years. I still technically work in the field of my dissertation, which a lot of other people can’t say. I worked with data from banking, and my topic was attrition, the likelihood of people leaving their bank if they had certain characteristics. It was a quantitative model, artificial intelligence by some definitions, and I am really thankful that I got to work with that data and then got to work in the field, where I got to use a lot of the things that I learned.

Joanne Tan 1:01:33
So your dissertation was related to the banking industry?

Tammy Roust 1:01:36
Correct. My dissertation was related to the banking industry.

Using systemic, integrated AI data to better humanity

Joanne Tan 1:01:39
Okay, so given your unmatched combination of AI governance, data strategies, FinTech, Gen AI, financial market and cybersecurity regulation, and your experience as a NASA engineer, how will you impact a better world in the next 10 to 20 years?

Tammy Roust 1:02:02
So I’ve always maintained that a lot of what I do, whether I call it technology or finance or something else, is really to take structured tasks and automate them, so people can spend more time on the things that they enjoy, that require deep thought, because they’re being interrupted from that deep thought by some of these structured tasks that computers are better at anyway. That’s what I think I’ll be doing for the next 10 to 20 years.

AI for automation, so we can free up human capital to do the things we really care about. In any data project that I’ve ever been on, 80% of the time is focused on cleaning the data. If we can get the data quality improved through AI, like we were talking about with that hospital availability data, then we can reduce the time we’re spending doing data cleaning and allow for further analysis, and then meta-analysis, and then connecting the meta-analyses from all the different subject domains to get a big picture of what’s really going on.

So one example: I go to my doctor, and they always tell me that I didn’t have my annual shot. And I always tell them, yes, I did, it was on this date at my local pharmacy. The local pharmacy has a data share with my doctor’s office and the public health system, but the data share between my doctor’s office and the local pharmacy covers only prescriptions, not shot records. And the public health system doesn’t really send data to the doctor’s office, so it’s inevitable that I’m answering the same question at the same time every year when I go, because there’s some breakdown in the system that I can’t find, but AI can find it. Identity resolution is an AI function. You don’t want to know this, but in reality, I can identify with 87% likelihood who someone is based on their birth date, gender, and zip code. So how about instead, the nurse’s assistant for my doctor gets a message from public health, or from the pharmacy for that matter, saying: hey, we have a likely hit for your patient from the public health database, can you review and confirm? Because, again, the pharmacy knows who I am, and the doctor’s office knows who I am. They both can identify me using an AI agent and say, hey, this person looks like they got a shot, is this the same person that you know of? And if it’s yes, then do the data handshake and just transfer the data, or forward that message to the doctor’s record. They can ask me: hey, I don’t see a record that you had your annual shot, but it looks like it might have been on X date at Y pharmacy, does that match your records? And if the answer is yes, please change my record. As opposed to, every time I go to my doctor, her asking me, have you had your annual shot, and me telling her, and her typing it into the record, but it doesn’t wind up in the shot record. So every time she reviews the shot record, she comes in and asks me the same question.
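The identity-resolution handshake described here can be sketched as a match on quasi-identifiers (birth date, gender, zip code) followed by human confirmation before any record transfer. All records and the matching rule below are illustrative; a production system would use probabilistic matching, not exact equality.

```python
# Sketch of identity resolution between a pharmacy and a doctor's office:
# match a patient on birth date, gender, and zip code, then ask a human
# to confirm before the shot record is transferred. All data is fictitious.

pharmacy_record = {"dob": "1980-05-14", "gender": "F", "zip": "60601",
                   "event": "annual shot", "date": "2024-10-02"}

doctor_patients = [
    {"id": "pt-001", "dob": "1980-05-14", "gender": "F", "zip": "60601"},
    {"id": "pt-002", "dob": "1975-01-30", "gender": "M", "zip": "60614"},
]

def candidate_matches(record, patients):
    """Quasi-identifier match; hits go to a human for review, not auto-merge."""
    keys = ("dob", "gender", "zip")
    return [p["id"] for p in patients
            if all(p[k] == record[k] for k in keys)]

hits = candidate_matches(pharmacy_record, doctor_patients)
if hits:
    # In practice this message would go to the nurse's assistant:
    # "likely hit for your patient -- please review and confirm."
    print(f"Likely match for {hits[0]}: confirm and update shot record")
```

Keeping the human confirmation step is the point: the AI agent proposes the match, and the data handshake happens only after review.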

And that seems like a little thing, right? But it makes a big difference in public health, because the number of people we know who have had their annual shot impacts where we direct our resources if there’s an outbreak. We use public resources, and that’s taxpayer money; we want to use it effectively and efficiently, not direct it to where it doesn’t need to go. We want to be able to say: this is the population that has not gotten their annual shot, this is where we need to send the shot van.

Joanne Tan 1:05:36
So you want to see a future that you will impact with systemic, integrated AI data for consumers,

Tammy Roust 1:05:46
Yes,

Joanne Tan 1:05:47
for FinTech, that’s how you’re going to impact life in the next 10 to 20 years,

Tammy Roust 1:05:52
Using technology to help people live better. Yeah.

Curiosity, Care, Courage, Considerate, Tenacious, and Persevering

Joanne Tan 1:05:56
That’s wonderful. Last but not least, I ask all my honored guests for Interviews of Notables and Influencers this question, what does your brand stand for?

Tammy Roust 1:06:10
My brand stands for curiosity, care, courage, and consideration. To be considerate, to care for others, and to have curiosity in all of the things you do.

Joanne Tan 1:06:23
And tenacity, perseverance,

Tammy Roust 1:06:28
To persevere, yes. My brand is also about being persistent and persevering through any difficulties to achieve the ultimate goal.

Joanne Tan 1:06:38
Yes, and that’s proven by the amazing things you have accomplished so far, and I don’t know what else you can accomplish down the road. I mean, you are still very young.

Tammy Roust 1:06:53
It doesn’t feel that way most days.

Joanne Tan 1:06:57
You have a lot to give to humanity, with all your diverse background, experience, and in-depth knowledge. So it has been a tremendous honor at this Interview of Notables and Influencers, and I truly wish you, from the bottom of my heart, the best for your career, for your kids, for yourself, for your happiness.

Tammy Roust 1:07:19
Thank you. It’s been a pleasure.

Joanne Tan 1:07:21
Thank you.

Tammy Roust 1:07:22
Thank you very much, Joanne. Have a good weekend.

Joanne Tan 1:07:25
Okay, bye.

Tammy Roust 1:07:26
Bye-bye.

© Joanne Z. Tan, all rights reserved.


 

Please don’t forget to like it, comment, or better, SHARE IT WITH OTHERS! 

– To stay in the loop, subscribe to our Newsletter

– Download free Ebook


(About 10 Plus Brand: we provide the “whole 10 yards” of brand building, digital marketing, and content creation for business and personal brands. To contact us: 1-888-288-4533.)

– Visit our Websites:

https://10plusbrand.com/

https://10plusprofile.com/

Phone: 888-288-4533

– Find us online by clicking or follow these hashtags:

#10PlusBrand

#10PlusPodcast

#JoanneZTan

#10PlusInterviews 

#BrandDNA

#BeYourOwnBrand

#StandForSomething

#SuperBowlTVCommercials

#PoemsbyJoanneTan

#GenuineVideo

#AIXD

#AIExperienceDesign

#theSecondRenaissance

#2ndRenaissance


Decode | Create | Amplify