
Monthly Archives: February 2019

Team 7 Week 7 Progress Report

Questions for This Week

  • MVP Applications: Is our tagging system effectively capturing key engagement details? Is our tagging input system intuitive? What information would beneficiaries submit using our form?
  • Unit Dynamic: How can we ensure that R&D does not feel they are taking a major risk submitting their data to our system? What boundaries are necessary for each of our beneficiaries to accomplish their goals in this system? How do we achieve sufficient buy-in across the unit to implement the system?
  • Dual Use: Can people from other agencies understand the intent of our MVP? Can they envision how they might use it in their context, even if it’s currently built with USASOC’s workflow in mind?

What We Are Testing

  • Our manual input form is intuitive and easy to use.
  • Our manual input form only asks for relevant information.
  • Our manual input form allows for the submission of all relevant information.
  • Individuals who are highly concerned about CDD interference will not share any information about their projects.
  • If the squadrons and CDD buy in to the project, apathy from higher ups will not be a problem when it comes to implementation.
  • Beneficiaries seeking out a project have enough context to work with limited information. [condensing search output]
  • Search and input MVPs are relevant to different kinds of workflows.

Key Insights

Our most important feedback this week was that our input MVP seemed to have the flexibility to cover a range of reporting that we did not anticipate. Our contacts in AFSOC and JSOC both cited ways they could see themselves using it even though the MVP was not designed with their workflow in mind. This makes us more confident that we are striking the right balance between having some operational guidelines while providing considerable flexibility to submit whatever information is relevant. It also suggests that such a program could be functional across units without drastic changes.

Culturally, we learned from Nic in R&D that the core communication issue isn’t necessarily that people’s projects will get shut down. The real issue, in his opinion, is that people don’t feel like they’re being heard when they do talk to CDD and therefore don’t really see an upside to reaching out. Shutting down projects might not be a huge problem, but it’s still a downside, and the downside is what people will think about if they don’t perceive an upside. In theory this perspective fits with some of the prior testimony we have heard, but we would like to hear more about it from those who previously expressed concerns about CDD interference.

Interviews (10/82)

SGM Angel – Future Concepts

  • Even among people who suspect CDD will try to shut down their projects, there is usually a level of information they are willing to share. Can’t speak to specifics for every individual.
  • Affirmed the value of reaching out to Subject Matter Experts after submitting a proposal/requirement. It can be difficult to know if you have all the relevant information just from looking at a report, so the ability to check in is important.
  • People can currently check the status of their pre-validation requirements via DecisionTracker. Most people hate it because it’s such a pain to access through the portal.
  • Ideas get triaged all the time, particularly at the Requirements Review Board level. People need to accept that some ideas will not move forward.


  • [on search MVP] It would be nice to somehow stitch together pertinent information from multiple site visits/trip reports so the customer could have a consolidated picture of past convos.
  • A streaming comments section would help as well to make updates easy.
  • Easy add: recency filter. We should be able to cut out stuff that doesn’t matter anymore.
  • Harder add: relevancy filter; “Decision X has been made which renders Y information obsolete.”
  • Wants to know how old information gets scrubbed so that the search function stays clean.
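Angel's "easy add" reduces to a date cutoff over whatever records a search returns. A minimal sketch, assuming a hypothetical `TripReport` record of our own invention (the field names are illustrative, not the MVP's actual schema):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TripReport:
    # Hypothetical record; field names are illustrative, not the MVP's schema.
    vendor: str
    submitted: date
    summary: str

def recency_filter(reports, max_age_days=180):
    """Drop entries older than the cutoff so stale results fall away."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [r for r in reports if r.submitted >= cutoff]
```

The "harder add," a relevancy filter, would first require decisions ("Decision X renders Y obsolete") to be recorded as structured data before anything comparable could be written.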

Brian – Signal Squadron

  • [on input MVP] I was just on a trip that I would need to fill one of these out for.
  • Thinks the types of inputs we are requesting would be easily searchable.
  • Found it harder to envision the exact process by which the form itself would be submitted.

Doug Brook – Professor, Fmr. Asst Secretary of Navy

  • [Inquired about the issue of siloing at the DoD level]
  • The Pentagon has uncountable silos.
  • Silos can be organizational, functional, or operational.
  • As such, it’s difficult to provide an illustrative example of exactly how these divisions play out. Rather, it’s more accurate to say that each individual views whatever topic is under discussion through the narrow lens of their job.
  • Leaders should theoretically bridge these divides, but there are many ways to hamper initiatives that don't fit one's interests.

MSG Matt – C4I

  • [on search MVP] “Very [useful], I would likely use this at the outset of every project.”
  • When researching, would use MVP to get a specific vendor’s history; any past interactions, any notes on performance or capabilities.
  • Would also want to use it to get a broad overview of what we’re doing in a given technology area.
  • Helpful deconfliction tool.

MAJ Mike – CAG in support of JSOC

  • Suggested we look into Project Vulcan.
  • More limited in scope than what we are doing, as it mainly solicits information from vendors.
  • This functionality alone was enough to generate some buzz with his people; he found out about this program by chance through one of his teams.

Nic – Squadron R&D Deployment

  • The whole building is visual; you have to be careful about adding too many layers to this thing.
  • Doesn’t necessarily care about the timeline of when his requirements get validated. Just wants to know when the tech he wants will be in his hands.
  • One element of the CDD/R&D issue is trust. People feel like they’re not heard when they do speak up, so there isn’t a lot of upside to sharing. Easier to think about the negatives.
  • End users need to be able to focus on the mission. We either need to add non-operational people to the squadrons to focus on this stuff or have CDD do it.

Rob – CDD Operations

  • Currently uses a computer system to submit RFOs.
  • Would not intuitively know enough about his colleagues' trips to fill in the gaps if a trip report wasn't clear on, for example, whether they were visiting a particular vendor.
  • All vendor-related trips can be shared organization-wide. He does not suspect anyone on his team would object if he decided to share some of their reports.

CAPT Nick – Kessel Run

  • Air Force going through similar issues as Army. Will deploy products 3-5 years after everyone stopped asking for them.
  • Decisionmaking in the military is most frequently top down and this causes problems. Higher ups make decisions based on their experiences, which can be miles removed from the situation on the ground today.
  • Believes that buy-in from higher-ups is still necessary but can be attained if you show the value the program has for subordinates. Everyone wants to look like they're doing a good job, so is there a way that supporting the program will help them do that?

MAJ Mike – CAG in support of JSOC (MVP feedback)

  • Liked the balance of available tagging options and flexibility to add new ones in input MVP.
  • Would expect to see all engagements go through this system.
  • Key info: who, what, where, when, why, and how to proceed.
  • Also helps to know who set it up, the original purpose of the meeting, and whether the outcome aligns with the original purpose.
  • Would be nice to be able to search the content of the trip reports.

Hypotheses Confirmed/Debunked

  • Confirmed: Our manual input form is intuitive and easy to use. Most users seemed to understand the purpose of each tag.
  • Partially Confirmed: Our manual input form only asks for relevant information. Nobody pointed out a particular tag as being irrelevant. However, nobody would use every tag on the list every time, so it’s hard to speak to what the unit as a whole would find useful.
  • Partially Confirmed: Our manual input form allows for the submission of all relevant information. It seems the flexibility of the system empowers people to think about how they’d submit whatever form they’re thinking about. Our Air Force contacts discussed using it to submit forms that we were unaware of.
  • Debunked: Individuals who are highly concerned about CDD interference will not share any information about their projects. There is some division as to exactly why some people are so worried, but this may be tied to the amount of information shared rather than whether it’s shared at all.
  • Partially Confirmed: If the squadrons and CDD buy in to the project, apathy from higher-ups will not be a problem when it comes to implementation. We will not know until we actually try. However, interviewees indicated the value of buy-in from the ground up when seeking a higher-up's approval.
  • Debunked: Beneficiaries seeking out a project have enough context to work with limited information. [condensing search output] We found instances where people wouldn't know things we would expect them to contextually know. However, people are also OK with getting a good contact and reaching out.
  • Confirmed: Search and input MVPs are relevant to different kinds of workflows. Its relevance to our contacts from outside USASOC indicates it has a degree of flexibility.

Questions for Next Week

Is our input system more convenient than whatever people are doing now?

How many forms could our input system replace?

What amount of limited information is even the most paranoid R&D employee willing to share freely?

How can the search interface be condensed effectively? What kind of info should a condensed version display?

What value can CDD deliver that makes communicating with them worth the risk of getting a project shut down?

How can supporting our product benefit higher ranking people at USASOC, even if they don’t intend to use it?

Team 7 Week 6 Progress Report

Questions for This Week

  • MVP applications: Given the desire to find a point of contact through our system and assuming we can provide this point of contact, what other information would be helpful to see right away?
  • Unit Dynamic: In what instances has CDD ‘overstepped its bounds?’ Does R&D’s recollection of these actions align with CDD’s justification? Is CDD capable of providing some kind of assurance that it won’t directly interfere with R&D efforts?

What We Are Testing

  • When using our MVP, beneficiaries can envision the type of information they would look for.
  • Our MVP correctly displays the types of information our beneficiaries want.
  • Our MVP currently displays relevant information in the most easily digestible form.
  • If there were zero technical challenges to communication, some beneficiaries would choose not to communicate regardless.

Key Insights

Our most important feedback this week was that we are on the right track with our general idea. Our potential users and our problem sponsor were consistently able to describe ways that they see themselves using this system once it is set up. With that level of confidence in the core concept, we are now more at liberty to sort out the details that will allow us to consistently display the right information at the right time. At this stage, we were presented with two interesting challenges: how to contend with trip reports that cover multiple vendor engagements and how to track the problems themselves rather than just R&D efforts. We will need to better wrap our heads around those two issues before we decide what is viable.

We also gained some valuable insight on some of the perceived cultural problems we’ve explored for the last two weeks. In particular, Angel clued us in to the historical context of why and how CDD might shut down a given project and how that perception still creates challenges even though they’ve worked hard to change their behavior. Given the extent of testimony on the subject since our base visit, we are now in a good position to discuss what kinds of boundaries would minimize a beneficiary’s reluctance to contribute to a central database.

Although it was not our focus this week, our interview with Tim, who built a similar system for the White House, yielded an important insight into encouraging proper data entry. In his system, he created a field where people could list a proposed follow up date, then on that date he would go and ask if that person followed up. This let people know that someone was paying attention to what they were saying and made them more likely to do it properly. Consistent behavior on one end encourages consistent behavior on the other.

Interviews (10/72)

SGM Angel – CDD – Future Concepts

  • R&D people look for solutions on their own because they can do it faster than CDD, even though CDD is faster than the rest of the Army, and because they tend to be among the few with core competencies in the areas they’re looking at.
  • The problem with this is that R&D people don’t always understand the acquisition rules, and CDD doesn’t always have insight into the problem R&D is working on, which can lead to potential rule breaking.
  • CDD had a historical tendency to shut people’s projects down when it found out about this kind of rule breaking rather than communicating the problem. This is changing but the perception still exists.
  • Believes that for people to freely share that info, they need to feel safe that they’ll be able to work on the projects they’ve determined to be important.

Dr. Jared – Course Adviser

  • In his class, some teams ended up with a prototype because there was an obvious need and they could build something without actually working with their sponsor’s data.
  • Sometimes the answer to the problem isn’t necessarily a product, but a process recommendation.
  • They kept working on the project after, particularly on the technical approaches. They found it was technically possible but were never able to demonstrate at scale. He’s working on some of the same issues today.

MAJ Mike – CAG in Support of JSOC

  • I understand that USASOC is more interested in vendors and products but I’d want to track all engagements. Anytime anyone has a meeting with someone, particularly at another government agency.
  • With that in mind, I’d recommend making ‘trip reports’ broad enough to include information about any kind of interaction.
  • You will need to contend with the fact that sometimes people have multiple engagements on one trip and make it easy to distinguish between different vendors/agencies/POCs. Otherwise you’ll get data about a few vendors listed under a single vendor.
  • Setting aside the specific things I’m interested in, the features themselves seem like something I would use. I’d maybe want the email subscription to do some sorting for me.

COL Mike – CDD Director

  • When I talked about scraping data from publicly available proposals, the main thing I was interested in is landing on the right person to talk to. I want to easily know what’s out there and can basically do that now, but it’s harder to find the right point of contact to learn more.
  • What I imagine is being able to connect our problem statements to a database query and see what comes back.
  • Even on projects where we have already decided to spend money, it’s hard to get the people who pushed for those projects to follow up. That’s because deployment might be a couple years away and they have new problems to deal with.
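The query COL Mike imagines could start as crude keyword overlap between a problem statement and stored report text, surfacing a point of contact rather than a pile of documents. A sketch under assumed data (the dict keys and tokenization here are our own, purely illustrative):

```python
def match_contacts(problem_statement, reports, min_overlap=2):
    """Return (contact, text) pairs whose report text shares at least
    `min_overlap` substantive words with the problem statement.
    `reports` is a list of dicts with hypothetical 'contact'/'text' keys."""
    def words(text):
        # Ignore short filler words; strip trailing punctuation.
        return {w.lower().strip(".,") for w in text.split() if len(w) > 3}
    want = words(problem_statement)
    return [(r["contact"], r["text"]) for r in reports
            if len(want & words(r["text"])) >= min_overlap]
```

Raising `min_overlap` trades recall for precision; a real version would likely swap in proper text search, but the shape of the query is the same.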

Rob – CDD Operations (MVP Update)

  • Liked the basic idea of the mockup and felt he could ID ongoing R&D efforts from it.
  • However, would like to track R&D efforts “before they become R&D efforts.”
  • In a perfect world, he could track problems as they arise, before they turn into R&D efforts.

CW3 Steve – Network Operations

  • Confirmed that USASOC’s networks could support a python web server.
  • Requested specifications on memory, storage requirements, etc. prior to implementation.
  • They’ll also need some version information so their cybersecurity team can look everything over.
  • Once the client is installed he does not foresee any problems.
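For context on what Steve's team would be reviewing: Python's standard library alone can stand up a server of the kind discussed, which keeps the version information down to a single Python release. A localhost-only sketch (the handler body is a placeholder of ours, not the actual MVP):

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

class MVPHandler(BaseHTTPRequestHandler):
    """Placeholder handler; a real MVP would serve the search and input pages."""
    def do_GET(self):
        body = b"input/search MVP placeholder"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence demo logging; a real deployment would log properly

def make_server(host="127.0.0.1", port=0):
    # Port 0 lets the OS pick a free port; bind to localhost until the
    # cybersecurity review clears wider access.
    return HTTPServer((host, port), MVPHandler)
```

Calling `make_server().serve_forever()` runs it; the memory and storage footprint of a server like this is negligible, which speaks to the specifications Steve requested.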

Tim – Defense Digital Services

  • Managed similar problem on White House networks, which wanted to have more modern forms of communication while complying with Presidential Records Act.
  • White House also wanted to track tech engagements, had many instances of duplicated trips.
  • The consequence of spending so much money on those engagements is they had to be more choosy about which tech they landed on, which is a problem when dealing with stuff that’s going to become obsolete pretty quickly.
  • Key implementation tactic: included a field for proposed follow up date, and actually checked in with people to see if they followed up on the promised date. This convinced people that the data they are entering is actually being used.
  • Key info for me: who reported it, date submitted, issues and comments, which other issues it’s linked to.
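Tim's tactic amounts to one extra field plus a recurring query. A sketch under an assumed schema (the field names are ours, not his system's):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Entry:
    # Hypothetical schema; field names are illustrative only.
    reporter: str
    issue: str
    follow_up_on: Optional[date] = None

def due_for_follow_up(entries, today=None):
    """Entries whose proposed follow-up date has arrived. Checking in with
    the reporter on that date signals the data is actually being read."""
    today = today or date.today()
    return [e for e in entries if e.follow_up_on and e.follow_up_on <= today]
```

Running this daily and acting on the results is the whole mechanism: consistent behavior on one end encourages consistent behavior on the other.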

Will – Defense Digital Services

  • Our ultimate goal is to put in place the right technologies to mitigate the risks from our adversaries in any possible situation.
  • The tech itself is easy to find, the challenge is applying it so that it works for us, then getting it deployed.
  • Our team of about 45 consists of engineers, product managers, and ‘bureaucracy hackers.’ The engineers figure out how to make stuff work, the product managers figure out what combination of things we want to use, and the bureaucracy hackers make sure we’re compliant with the law.


  • Confirmed that our MVP appears to be attempting to solve the correct problem.
  • Suggested that civilian companies may have similar issues.
  • Increasingly concerned about the input side – people will absolutely have to input data for the system to work but people are lazy.

Nic Vandre – Squadron R&D Deployment

  • Described himself as an end user in the field, saying he needs a capability. His role is to take new tech and figure out how it will work at the warfighter edge.
  • He gets his tech from two places: whatever CDD has for him and whatever he can find in industry.
  • He won’t tell anyone about his projects until he is ready to submit a requirement. Sees the unit as highly autonomous and low on vetting; one person submitting a requirement can hold a lot of sway. He primarily vets ideas with his squadron.
  • Considers his industry research to be supplemental to what he sees as CDD’s job. Believes CDD should be on top of what industry is offering and keeping him posted on what they’re going to send.
  • In general would like a better way to track how CDD is responding to his needs and whether they intend to follow up.

Hypotheses Confirmed/Debunked

  • Confirmed: “When using our MVP, beneficiaries can envision the type of information they would look for.” Beneficiaries seem to have a general sense of what they are looking for to solve their problems when they look at the system. Some people are looking to associate a specific name with a given engagement so they can follow up in person. Others want to see what recent activity is going on around a specific type of capability to try to anticipate needs that will take up some portion of the budget. Others want to make sure nobody is working with the same vendor as them or on the same capability as them.
  • Partially Confirmed: “Our MVP correctly displays the types of information our beneficiaries want.” Generally speaking, the information that is already on display in the MVP is seen as relevant. However, some people worried that our system wouldn’t do a great job accommodating the existing habit of reporting on multiple engagements in a single trip report and would want to know that the system is attempting to address that on some level. We also heard from multiple people that they’d like to be able to track *problems* and not just what capabilities people are looking into. Rob in particular felt that it would help him in his job if he could see the problem before R&D undertakes any efforts to address it.
  • Debunked: “Our MVP currently displays relevant information in the most easily digestible form.” This hypothesis was primarily debunked with the help of our problem sponsor, who gave us a better sense of the volume of entries he’d expect on such a system. In essence, our existing method of displaying the text of the trip report would become overwhelming if people start doing searches that yield hundreds of results. Would instead like to see an easy to scroll through list that he can expand/collapse as needed.
  • Confirmed: “If there were zero technical challenges to communication, some beneficiaries may choose not to communicate regardless.” Discussions with Angel and our problem sponsor confirmed that there can be a real fear among some people submitting their projects. Usually because they think their project is the most important thing, and the last thing they want is some higher up who doesn’t know any better coming in and making changes or even shutting it down. Joe suggests there is a gap between fear and reality.
  • Partially Debunked: “CDD goes around shutting down people’s projects without knowing what they’re doing.” Joe offered some clarification on this front, suggesting that the typical issue when something needs to be shut down (maybe just temporarily) has to do with complying with acquisition law. Angel supported that assertion in her call but said historically CDD maybe did have a problem with identifying a rule breaker and just shutting the thing down without communicating the problem and trying to have a discussion. She has put in significant work in the last 10 years changing that behavior but thinks that the perception is much harder to shake, even though a lot of the R&D people working now weren’t around back then.
  • Confirmed: “Government agencies similar to USASOC could repurpose our MVP to meet their specific needs.” Tony and Mike both suggested that our MVP was rather vendor-centric given their needs, but saw no reason why the engagements they are interested in couldn’t be included on a more generalized form as long as people are willing to report it. In essence, they saw themselves as similar enough to make this kind of solution viable.

Questions for Next Week

  • MVP Applications: Is our tagging system effectively capturing key engagement details? Are we pulling data from the right places? What kinds of manual input do we expect and how will they do it? Is there a simple way for this application to track “problems” rather than just capabilities?
  • Unit Dynamic: How can we ensure that R&D does not feel they are taking a major risk submitting their data to our system? What boundaries are necessary for each of our beneficiaries to accomplish their goals in this system?
  • Dual Use: Can we get anyone from private industry to talk with us directly about this issue? Can we generate signs of interest on our own?

Team 7 Week 5 Progress Report

Interviews (10/62)

MSG Tom – Future Concepts – Subject Matter Expert

  • Not sure what his most common search terms would be to find people who visited a given vendor or worked on a given capability.
  • The information that eludes him, and that he’s most interested in, is identifying previous colleagues that had visited a specific partner, whether that’s another government agency, lab, or vendor.
  • Would also like to know about what any previous colleagues had discussed but would rather find out with a quick phone call than by going through piles of documents. Going through piles of documents is what he does now and he’s never confident that he has all the information he needs.

Rob – CDD Operations

  • If he’s going to search for something, it will probably be by vendor, category of equipment, and/or timeframe.
  • It can be helpful, for example, to know who talked to a given vendor in the last 6 months. He should be able to easily distinguish the timeframe he cares about from no longer relevant entries.
  • The categorical search helps when he wants to know what vendors people are talking to about a given type of product.


  • Emphasized the value of a system that is both simple and able to put information right in front of people’s faces. Felt our email subscription solution checked those boxes.
  • Wished that it could somehow use the data collected to show who is due to provide a trip report. Noted that similar products are briefed all the time in the military and can be a good motivator. Did not provide an example of a time when such a system proved effective.
  • It would help to have some basic templating to help folks who are totally unsure what to say to write something. Stuff like date, participants, products, conversation, and the ability to attach a file.

John – Squadron R&D Specialist

  • Has experienced both visiting a vendor only to find out that someone else had already made contact and finding out that someone visited a vendor he was already working with.
  • Planning on a trip next week – would absolutely search a database to see if anyone else made the same trip. Trips cost time and money.
  • Believes the problem can extend beyond duplicated trips and is reflected across other government agencies – individuals can spend years working on a capability only to find out that another agency already developed it.
  • Warned of a slight distinction between how civilian employees apply to go on a trip compared to enlisted employees. Each form includes the same basic information but may require looking in different places.

MSG Darrell – R&D Section Leader

  • Estimated he spends about 10-20% of his time interacting with CDD but it depends on the project. For some it’s 100% involvement, for others they might not need any.
  • Values CDD’s understanding of the acquisition process; suggests they may sometimes overstep their bounds in shops that are not part of the unit’s core competencies (which he considers to be the more traditional development teams, such as guns, breaching, etc.).
  • Suggested a willingness to share information about his projects at an early stage with two caveats. First, this already happens among the more tech savvy people and CDD is not often a part of it due to lack of right personnel or passion. Second, people in his position would worry about CDD using such a platform as a way to wield oversight over them and shut projects down.

Brian – R&D Section Leader

  • Noted that his team primarily receives new capabilities and has to figure out how to make them work with all the existing systems.
  • His problem: with the increasingly complex layers of networks he has to work with, it becomes increasingly difficult to cope when he suddenly finds out he needs to implement a capability that’s been in development for years.
  • He wants to know anything new happening with ATAC (their primary software interface). For example, when someone uses or integrates a program in a new way, right now they’ll write up an AAR saying what they did, what they’re going to keep doing, and what they need to improve on. He doesn’t know where they go. Once they get emailed, if he doesn’t see them right away they might as well go in the trash.
  • What he would REALLY love is to be able to put the content of webpages onto a program where people could comment with their work related details. He follows a number of national labs and believes they provide the best info about capabilities, but it’s extraordinarily difficult to get that type of information on the classified network where they do most of their work.
  • Indicated that he tries to keep CDD in the loop on his projects and generally sees CDD as helpful in making the money work for various projects.

MSG Tom – Future Concepts – Subject Matter Expert (MVP Feedback)

  • Liked that all the data would go to one place, but noted that his problem isn’t finding data, it’s finding the most relevant data. He can go through a squadron’s trip reports but he has no way of telling which one is relevant to his needs.
  • Clarified that he wants to be able to call people because he’s also interested in knowing what was left off the trip report. Finding the people who talked to a specific vendor or agency is a 10/10 score in his book. If he gets a name, he can make a call and know what he needs to know in five minutes.
  • Sees further value in attempting to categorize the capabilities they are working on and search by initiative type. Sometimes teams with completely different specializations wind up working with the same vendor.
  • Values the ability to find no results and as a result say with reasonable confidence that nobody else is working on a similar capability.

Rob – CDD Operations (MVP Feedback)

  • Only cares about 4 categories of trips: visits to vendors, visits to trade shows, visits to demonstrations, and visits to test capabilities.
  • The majority of trip reports are useless to him and he does not want to have to sort through them to find relevant ones.
  • Would want the ability to do some level of filtering by type of trip in order to use such a program.


  • Unit has 12 full-time staffers and as many as 100 people working on different platforms, nebulously attached to the organization.
  • Thinks traditional KM solutions like Salesforce need an admin to be sustainable.
  • Primary communication challenge is uncertainty over who to connect new entrepreneurs to within organization.
  • Suggested the problem in USASOC is also present in AFWERX, DIUx, and Cyberworx. Believes AFWERX would take notice if one of those organizations could successfully address it.


  • His job right now is essentially to solve our sponsor’s problem on a small scale within his team at AFWERX. Believes the next great challenge is finding a way to scale tech scouting effectively.
  • They are currently working on the problem from a user adoption side and from the side of pulling data they want not just within the unit but across DoD.
  • In a perfect world they could see all the things in the pipeline not just within their unit but across other agencies and the commercial sector.
  • Primarily believes this is an issue in the burgeoning venture capital space within the government (AFWERX seems to function more like a VC group than USASOC).

Hypotheses Confirmed/Debunked:

  • Confirmed: Beneficiaries within R&D and CDD want to be able to easily discover and contact people who are working on similar projects. Respondents suggested they already take time to do this and would gladly use a system that shortened the process.
  • Debunked: Beneficiaries know exactly how they would search for technology relevant to them. Only one respondent suggested a specific type of capability he would subscribe to. Another suggested that the challenges associated with searching undermines his confidence that he has the best information available.
  • Debunked: If all relevant data is in a central location, people will more easily be able to find it. Tom’s testimony suggests that if the tool cannot reasonably parse through a large volume of data and identify the most relevant results to him, it wouldn’t necessarily matter that he was able to do it all in one place. He still wouldn’t have confidence in his findings.
  • Confirmed: If we build a program that benefits USASOC, that program can provide similar benefits to other tech-centric agencies.
  • Confirmed: The gap between R&D and CDD is more than a failure to communicate. After weeks of reading between the lines, we finally heard some testimony indicating that some parts of R&D see CDD as intrusive and potentially threatening to their projects. Now that we have some evidence that this tension exists, we will need to better understand each side’s perspective so we can better account for it when implementing any solution.

Questions for Next Week

  • MVP applications: Given the desire to find a point of contact through our system and assuming we can provide this point of contact, what other information would be helpful to see right away?
  • Unit Dynamic: In what instances has CDD ‘overstepped its bounds’? Does R&D’s recollection of these actions align with CDD’s justification? Is CDD capable of providing some kind of assurance that it won’t directly interfere with R&D efforts?
  • Dual Use: Are other tech-centric government agencies similar enough to USASOC to effectively implement a USASOC solution? Is its value broad enough to generalize to government as a whole? To commercial industry?

Team 7 Week 4 Progress Report

Interviews (10/52)

MSG Edward – Network Operations

  • Available databases include: rudimentary Request for Orders forms, inventory management, CDD purchases (buy-try and procurements), and O&M purchases.
  • Every penny of government money has a receipt. There is a document somewhere proving someone spent that money.
  • O&M Log Form request goes into a database as soon as you start the process, before it is approved. Currently housed in Microsoft CRM but working on a way to integrate with financial system.

Cory – Deputy Dispersing Officer – Finance Office

  • How travel works once an RFO is filed: First, the organization signs off on travel. Second, comptrollers assign account lines to the RFO with an estimated dollar amount for the trip. Third, HR creates the document (1610 form) stating what you’re allowed to do and where you’re going. Many people don’t take those orders with them.
  • You go to the location, do your thing, come back, and use your 1610 orders to fill out a 1351-2 voucher, which includes your itinerary. Someone reviews and validates that, then it comes to the finance office with your receipts and your orders.
  • Plugged into system that automatically tabulates how much you are owed for the trip.
  • Information shared on RFOs and 1351-2s does not necessarily include all information about a trip.

Ed – Network Operations – Travel Voucher Portal

  • Process handled on a system called IATS. The owners will not share the data necessary for integration, and the unit can’t get another program due to red tape and cost.
  • IATS is also a standalone server in the building, part of a system used throughout DoD. It can communicate with DFAS but will be hard to connect with otherwise.
  • Unclear whether the primary obstacle is DFAS’s unwillingness to share or a lack of cooperation from the vendor. However, IATS was apparently an off-the-shelf solution (fascinating in its own right), and we suspect that this could be complicating current attempts at customization.

Tim – Higher Echelon

  • Salesforce is intent on building high-level reports from data aggregates.
  • Focus on capabilities rather than value provided.
  • Strategic focus suggests that an entirely off-the-shelf solution isn’t feasible.

Natalya – Salesforce – Developer

  • Uncertain how long it would take to integrate Salesforce UI into USASOC networks.
  • Uncertain which data sources Salesforce would use to generate reports.

Sean – Salesforce

  • Demoed potential product for us.
  • Emphasized the ability to track money spent with a vendor over time – unclear whether it could distinguish funding streams without additional input.
  • Lots of features packed into a sales-oriented interface. Lots of little potential solutions that can be overwhelming when taken in all at once.

MSG Joe – C4I – Commodity Area SME

  • Highly invested in finding a more efficient way to track projects – the time he spends trying to find people is time he could be spending looking into the tech.
  • Insists that despite the idiosyncrasies between different teams, the core organizational workflows are well defined.
  • Strong believer in leveraging existing data flows, as opposed to requiring new forms of data entry.

Gabe – Squadron Ops – Built an Engagement Tracker

  • Confirmed interest in tracking “RDT&E-like activities” and that it would aid in the deployment process.
  • Engagement tracker worked initially but fell out of use as people left.
  • Believes there were three key problems with the program: maintaining awareness of its existence, the learning curve required to use it, and the inability to make it an institutional behavior rather than one dependent on individual personalities.
  • “That’s the main problem: we sometimes get the right personalities to deal with this stuff on a human level, but no good institutional method.”
  • Noted that there was also a compartmentalized list of people who could see and use the program.

COL Mike – CDD Director

  • Noted something that everyone has to do when filling out a requirement: justify that there are no comparable solutions currently out there. It takes effort to do it right; most people just call a contact or two. But if they could make an effective case easily, they probably would.
  • A greater challenge than tracking vendors is knowing which agencies are actively contracting with them. From my position, I want to know if DARPA is working on the same thing as me. Some vendors are unscrupulous and will charge three different agencies for the same product without telling them what the others are doing; this visibility at least gives us some leverage.
  • Unfortunately, particularly with R&D, there isn’t a ton of incentive to share the program. R&D folks aren’t rewarded based on deployment and worry their funding will get cut if they are found to be redundant.
  • If I want to see what industry is doing I can look up BAAs or SBIR initiatives and see who responded. It is much harder to find out who talked to a given company even though that data is publicly available.
  • We have tried a number of CRM solutions to solve the issue of communication within the unit but it’s hard to get buy in. Would probably be more helpful to scrape that stuff automatically even if it’s not perfect.
  • Sometimes people will misrepresent what they’re doing on an RFO because they don’t want you to know who they’re talking to. However, most people are just lazy.
  • Everyone has to contact the vendor at some point if they’re gonna work with them, and probably file a request to visit, bring them on base, or even talk on the phone. There is some record of it at some point, even if it’s just an email in someone’s inbox.


  • Clarified why informal knowledge of potential procurements would help: the budget is planned by the start of the fiscal year, and anyone who comes in with new requirements after that disrupts the process.
  • Even worse, sometimes the way they develop the project with vendors means we have no choice but to find more money.
  • The types of purchases that go through O&M aren’t necessarily the biggest threat to fly under the radar in this regard, since we have some ways of tracking activity that involves an O&M request. Sometimes R&D-like guys get free samples that don’t require any kind of O&M form, decide they want to buy the product, realize they can’t do it with a log request, and then take it up to procurement much later than they should have.
  • We literally wouldn’t mind if people came and talked to us before they even started on a project, but there’s no forcing mechanism to get people to give us a heads up as early as possible.

Hypotheses Confirmed/Debunked

Debunked: “There is no mandatory, standardized, centrally deposited form outside of the requirements process that could indicate a future requirement.” While we brainstormed ways that less uniform methods of reporting, such as trip reports and AARs, could be leveraged to better track projects, we worried that we did not have a great way to account for the ‘failure to report’ problem, given the existing culture around these nonmandatory reports. A few beneficiaries at our base meeting, along with MSG Edward, confirmed that all O&M purchases have a paper trail that starts prior to the release of funding. Properly tracking O&M expenditures to better predict potential procurements presents its own set of challenges, but also an opportunity we didn’t previously think we had. We also had our original impression of RFOs challenged by Rob, who said that 99.9% of the time they get filled out before a trip happens.

Debunked: “All data at USASOC is readily accessible if you know where to look.” Database accessibility can depend on which networks it exists on and whether it feeds back to another agency such as DFAS. Access could become even more difficult if we need to interface with those systems directly.

Confirmed: “Most CDD budget experts can use informal knowledge of a project to appropriately plan its funding.” The real challenge comes after the budget is set because the money everyone is aware of has already been spent.

Confirmed (CDD side): “CDD budget experts believe that knowing about a project in advance will speed up the deployment process.” If a project gets reported early enough and doesn’t get funded, it’s probably not that high of a priority. So for CDD it clearly speeds the process up. Less clear is whether the time difference is meaningful to anyone outside CDD. COL Mike added that it increases the likelihood that someone at CDD can find a source of funding at another agency as well.

Somewhat confirmed: “This advance knowledge is valuable enough that CDD budget experts would seek it out if it were readily available.” It is clear from our conversations that people at CDD will go to great lengths to find this knowledge. However, that willingness does not seem to translate to a collective effort to get this information onto one of the many solutions they have tried. Something appears to be blocking these solutions from scaling across CDD.

Next Week:

– Do R&D cell leaders believe that CDD having advance knowledge of their projects will lead to faster deployment?

– If R&D cell leaders believed that providing CDD advance notice of projects would improve their deployment timeline, why don’t they always do it?

– Do R&D leaders associate any risks with sharing information about their project early in the process?

– If R&D leaders could be convinced that early reporting would speed up the CDD timeline, would they see value in that?

– What level of detail is mandatory on RFOs? What data can’t consistently be collected from that form?