Questions for This Week
- MVP applications: Given the desire to find a point of contact through our system and assuming we can provide this point of contact, what other information would be helpful to see right away?
- Unit Dynamic: In what instances has CDD ‘overstepped its bounds’? Does R&D’s recollection of these actions align with CDD’s justification? Is CDD capable of providing some kind of assurance that it won’t directly interfere with R&D efforts?
What We Are Testing
- When using our MVP, beneficiaries can envision the type of information they would look for.
- Our MVP correctly displays the types of information our beneficiaries want.
- Our MVP currently displays relevant information in the most easily digestible form.
- If there were zero technical challenges to communication, some beneficiaries may choose not to communicate regardless.
Key Insights
Our most important feedback this week was that we are on the right track with our general idea. Our potential users and our problem sponsor were consistently able to describe ways that they see themselves using this system once it is set up. With that level of confidence in the core concept, we are now more at liberty to sort out the details that will allow us to consistently display the right information at the right time. At this stage, we were presented with two interesting challenges: how to contend with trip reports that cover multiple vendor engagements and how to track the problems themselves rather than just R&D efforts. We will need to better wrap our heads around those two issues before we decide what is viable.
We also gained some valuable insight into some of the perceived cultural problems we’ve explored for the last two weeks. In particular, Angel clued us in on the historical context of why and how CDD might shut down a given project, and on how the perception left by those shutdowns still creates challenges even though CDD has worked hard to change its behavior. Given the extent of testimony on the subject since our base visit, we are now in a good position to discuss what kinds of boundaries would minimize a beneficiary’s reluctance to contribute to a central database.
Although it was not our focus this week, our interview with Tim, who built a similar system for the White House, yielded an important insight into encouraging proper data entry. In his system, he created a field where people could list a proposed follow-up date; on that date he would go and ask whether the person had followed up. This let people know that someone was paying attention to what they were entering and made them more likely to enter it properly. Consistent behavior on one end encourages consistent behavior on the other.
Interviews (10/72)
SGM Angel – CDD – Future Concepts
- R&D people look for solutions on their own because they can do it faster than CDD (even though CDD is faster than the rest of the Army), and because they tend to be among the few with core competencies in the areas they’re looking at.
- The problem with this is that R&D people don’t always understand the acquisition rules, and CDD doesn’t always have insight into the problem R&D is working on, which can lead to potential rule breaking.
- CDD had a historical tendency to shut people’s projects down when it found out about this kind of rule breaking rather than communicating the problem. This is changing but the perception still exists.
- Believes that for people to freely share that info, they need to feel safe that they’ll be able to work on the projects they’ve determined to be important.
Dr. Jared – Course Adviser
- In our class, some teams ended up with a prototype because there was an obvious need and they could build something without actually working with their data.
- Sometimes the answer to the problem isn’t necessarily a product, but a process recommendation.
- They kept working on the project afterward, particularly on the technical approaches. They found it was technically possible but were never able to demonstrate it at scale. He’s working on some of the same issues today.
MAJ Mike – CAG in Support of JSOC
- I understand that USASOC is more interested in vendors and products, but I’d want to track all engagements: any time anyone has a meeting with someone, particularly at another government agency.
- With that in mind, I’d recommend making ‘trip reports’ broad enough to include information about any kind of interaction.
- You will need to contend with the fact that people sometimes have multiple engagements on one trip, and make it easy to distinguish between different vendors/agencies/POCs; otherwise you’ll get data about several vendors listed under a single one (see the sketch after these notes).
- Setting aside the specific things I’m interested in, the features themselves seem like something I would use. I’d maybe want the email subscription to do some sorting for me.
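To make the multiple-engagements point concrete, here is a minimal sketch of how a single trip report could carry several distinct vendor/agency entries, each with its own POC. This is our own illustration, assuming a Python implementation; the class names, field names, and example data are placeholders, not anything MAJ Mike specified.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Engagement:
    """One vendor/agency interaction within a larger trip."""
    organization: str   # vendor or government agency
    poc_name: str       # point of contact for follow-up
    capability: str     # what was discussed
    notes: str = ""

@dataclass
class TripReport:
    """A single trip that may include several distinct engagements."""
    author: str
    trip_date: date
    engagements: list[Engagement] = field(default_factory=list)

# Hypothetical example: one trip, two separate organizations, each of which
# stays searchable on its own instead of being lumped under a single vendor.
report = TripReport(
    author="MAJ Example",
    trip_date=date(2020, 1, 15),
    engagements=[
        Engagement("Vendor A", "Jane Doe", "UAS sensors"),
        Engagement("Agency B", "John Roe", "data sharing"),
    ],
)
```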
COL Mike – CDD Director
- When I talked about scraping data from publicly available proposals, the main thing I was interested in was landing on the right person to talk to. I want to easily know what’s out there and can basically do that now, but it’s harder to find the right point of contact to learn more.
- What I imagine is being able to connect our problem statements to a database query and see what comes back (a rough sketch of that idea follows these notes).
- Even on projects we’ve already decided to spend money on, it’s hard to get the people who pushed for those projects to follow up. That’s because deployment might be a couple of years away and they have new problems to deal with.
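As a rough illustration of connecting a problem statement to a database query, the sketch below scores stored trip reports by keyword overlap and surfaces the point of contact. The example data, scoring rule, and field names are placeholders we chose for illustration, not anything CDD has specified.

```python
# Hypothetical trip-report records; in practice these would come from the database.
reports = [
    {"poc": "Jane Doe (Vendor A)", "body": "Discussed lightweight UAS sensors for night operations"},
    {"poc": "John Roe (Agency B)", "body": "Talked through data-sharing agreements"},
]

problem_statement = "Need better UAS sensors for night missions"
keywords = {w for w in problem_statement.lower().split() if len(w) > 3}

def score(report):
    """Count how many problem-statement keywords appear in the report body."""
    return len(keywords & set(report["body"].lower().split()))

# Surface matching reports, best match first, so the right POC is easy to find.
for report in sorted(reports, key=score, reverse=True):
    if score(report) > 0:
        print(report["poc"], "- matched", score(report), "keyword(s)")
```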
Rob – CDD Operations (MVP Update)
- Liked the basic idea of the mockup and felt he could ID ongoing R&D efforts from it.
- However, would like to track R&D efforts “before they become R&D efforts.”
- In a perfect world, he could track problems as they arise, before they turn into R&D efforts.
CW3 Steve – Network Operations
- Confirmed that USASOC’s networks could support a python web server.
- Requested specifications on memory, storage requirements, etc. prior to implementation.
- They’ll also need some version information so their cybersecurity team can look everything over (a rough sketch of gathering that information follows these notes).
- Once the client is installed he does not foresee any problems.
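Ahead of that review, something like the snippet below could gather the interpreter and dependency versions the cybersecurity team asked about. The package names are placeholders for whatever the final web stack ends up using, not a committed dependency list.

```python
import platform
from importlib import metadata

# Placeholder dependency list; swap in whatever the final web stack actually uses.
packages = ["flask", "sqlalchemy"]

print("Python:", platform.python_version())
for name in packages:
    try:
        print(f"{name}: {metadata.version(name)}")
    except metadata.PackageNotFoundError:
        print(f"{name}: not installed")
```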
Tim – Defense Digital Services
- Managed a similar problem on White House networks; the White House wanted more modern forms of communication while complying with the Presidential Records Act.
- The White House also wanted to track tech engagements and had many instances of duplicated trips.
- The consequence of spending so much money on those engagements was that they had to be choosier about which tech they landed on, which is a problem when dealing with technology that becomes obsolete quickly.
- Key implementation tactic: included a field for a proposed follow-up date, and actually checked in with people on that date to see whether they had followed up. This convinced people that the data they were entering was actually being used.
- Key info for me: who reported it, date submitted, issues and comments, which other issues it’s linked to (a rough sketch of these fields follows these notes).
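Tim’s key fields, plus his follow-up-date tactic, map naturally onto a simple record structure. The sketch below is our interpretation of his description, not his actual schema; the field and function names are our own.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class EngagementRecord:
    """Rough mapping of the fields Tim described."""
    reported_by: str                             # who reported it
    date_submitted: date                         # when it was submitted
    issues_and_comments: str                     # free-text issues and comments
    linked_issues: list[int] = field(default_factory=list)  # related records
    proposed_follow_up: Optional[date] = None    # the field Tim used to prompt check-ins

def due_for_check_in(records, today):
    """Return records whose promised follow-up date has arrived, so someone can ask about them."""
    return [r for r in records if r.proposed_follow_up and r.proposed_follow_up <= today]
```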
Will – Defense Digital Services
- Our ultimate goal is to put in place the right technologies to mitigate the risks from our adversaries in any possible situation.
- The tech itself is easy to find, the challenge is applying it so that it works for us, then getting it deployed.
- Our team of about 45 consists of engineers, product managers, and ‘bureaucracy hackers.’ The engineers figure out how to make stuff work, the product managers figure out what combination of things we want to use, and the bureaucracy hackers make sure we’re compliant with the law.
MAJ Tony – AFWERX
- Confirmed that our MVP appears to be attempting to solve the correct problem.
- Suggested that civilian companies may have similar issues.
- Increasingly concerned about the input side: people will absolutely have to input data for the system to work, but people are lazy.
Nic Vandre – Squadron R&D Deployment
- Described himself as an end user in the field, saying he needs a capability. His role is to take new tech and figure out how it will work at the warfighter edge.
- He gets his tech from two places: whatever CDD has for him and whatever he can find in industry.
- He won’t tell anyone about his projects until he is ready to submit a requirement. Sees the unit as highly autonomous and low on vetting; one person submitting a requirement can hold a lot of sway. He primarily vets ideas with his squadron.
- Considers his industry research to be supplemental to what he sees as CDD’s job. Believes CDD should be on top of what industry is offering and keeping him posted on what they’re going to send.
- In general would like a better way to track how CDD is responding to his needs and whether they intend to follow up.
Hypotheses Confirmed/Debunked
- Confirmed: “When using our MVP, beneficiaries can envision the type of information they would look for.” Beneficiaries seem to have a general sense of what they are looking for to solve their problems when they look at the system. Some people are looking to associate a specific name with a given engagement so they can follow up in person. Others want to see what recent activity is going on around a specific type of capability to try to anticipate needs that will take up some portion of the budget. Others want to make sure nobody is working with the same vendor as them or on the same capability as them.
- Partially Confirmed: “Our MVP correctly displays the types of information our beneficiaries want.” Generally speaking, the information that is already on display in the MVP is seen as relevant. However, some people worried that our system wouldn’t do a great job accommodating the existing habit of reporting on multiple engagements in a single trip report and would want to know that the system is attempting to address that on some level. We also heard from multiple people that they’d like to be able to track *problems* and not just what capabilities people are looking into. Rob in particular felt that it would help him in his job if he could see the problem before R&D undertakes any efforts to address it.
- Debunked: “Our MVP currently displays relevant information in the most easily digestible form.” This hypothesis was primarily debunked with the help of our problem sponsor, who gave us a better sense of the volume of entries he’d expect on such a system. In essence, our existing method of displaying the full text of each trip report would become overwhelming if people start running searches that yield hundreds of results. He would instead like to see an easy-to-scroll list that he can expand or collapse as needed (see the sketch after this list).
- Confirmed: “If there were zero technical challenges to communication, some beneficiaries may choose not to communicate regardless.” Discussions with Angel and our problem sponsor confirmed that there can be a real fear among some people about submitting their projects, usually because they think their project is the most important thing and the last thing they want is some higher-up who doesn’t know any better coming in and making changes or even shutting it down. Joe suggests there is a gap between that fear and reality.
- Partially Debunked: “CDD goes around shutting down people’s projects without knowing what they’re doing.” Joe offered some clarification on this front, suggesting that when something needs to be shut down (maybe just temporarily), the typical issue has to do with complying with acquisition law. Angel supported that assertion in her call but said that historically CDD may well have had a problem with identifying a rule breaker and simply shutting the effort down without communicating the problem or trying to have a discussion. She has put in significant work over the last 10 years changing that behavior but thinks the perception is much harder to shake, even though many of the R&D people working now weren’t around back then.
- Confirmed: “Government agencies similar to USASOC could repurpose our MVP to meet their specific needs.” Tony and Mike both suggested that our MVP was rather vendor-centric given their needs, but saw no reason why the engagements they are interested in couldn’t be included on a more generalized form as long as people are willing to report them. In essence, they saw themselves as similar enough to make this kind of solution viable.
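On the display question above, a collapsed-by-default results list might look something like the sketch below. This is only our illustration of the expand/collapse idea our problem sponsor described; the function names and record fields are placeholders, and the real rendering would live in the web front end.

```python
def summarize(body, limit=120):
    """Collapse a full trip-report body into a one-line summary for list views."""
    text = " ".join(body.split())
    return text if len(text) <= limit else text[:limit].rstrip() + "..."

def render_results(reports, expanded_ids=()):
    """Show collapsed summaries by default; expand only the reports the user asks for."""
    rows = []
    for report in reports:
        body = report["body"] if report["id"] in expanded_ids else summarize(report["body"])
        rows.append(f"[{report['id']}] {report['poc']}: {body}")
    return "\n".join(rows)
```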
Questions for Next Week
- MVP Applications: Is our tagging system effectively capturing key engagement details? Are we pulling data from the right places? What kinds of manual input do we expect and how will they do it? Is there a simple way for this application to track “problems” rather than just capabilities?
- Unit Dynamic: How can we ensure that R&D does not feel they are taking a major risk submitting their data to our system? What boundaries are necessary for each of our beneficiaries to accomplish their goals in this system?
- Dual Use: Can we get anyone from private industry to talk with us directly about this issue? Can we generate signs of interest on our own?