Not every hospital is equipped to care for critically ill newborns, so rural hospitals often need to send these infants to larger hospitals for treatment. While the infant may be receiving state-of-the-art care, the transport team must grapple with tiny monitors, cramped spaces, and unreliable communication channels.
On the ground at the receiving hospital, the doctors are blind to the baby's condition and ETA until the baby physically arrives. While the doctor doesn't need to constantly monitor the baby's condition during transit, the lack of data during a medical crisis could hinder proper treatment. This combination of physical stressors on the team and poor communication creates additional pressure and uncertainty that could impact a fragile newborn's care.
As the entirety of the NEST project is still in the early research stage, this app serves as a speculative research tool to help us better understand how a reliable live feed may shape the experience of the receiving doctor, known as the medical control physician (MCP), while on standby for an incoming newborn. It will also help us better understand what data the MCP would want to see and how it should be presented to ensure a smooth transport experience.
I was brought on to design a mobile app that helps the MCP monitor the neonate during transport. I oversaw the planning, research, and design of this portion and worked alongside visual designers on the final polish. The final outcome is a mobile application prototype that demonstrates to MCPs what their jobs could be like with this technology. It will help inform the progression of the entire NEST project by providing insight into MCPs' needs.
What doctors currently see
What our design will let doctors see
NEST will only notify MCPs when vitals indicate a potential or unfolding crisis and will highlight information that may lead to an accurate diagnosis faster.
The MCP can easily take in a wide variety of data such as the baby's vitals and time to arrival to help them diagnose issues faster and recommend more accurate treatment.
The app lets the MCP connect to the neonate's data in a variety of ways, such as a live video feed, equipment readouts, and the patient chart, all from a single app.
I reviewed transcriptions of participant interviews with MCPs and noted the tools mentioned to begin mapping what they were and when they were used. I also created a mood board that helped me visualize the screens the MCPs would want to see.
Time is precious and medicine is complicated. I needed to build a foundational understanding of the transport process and align our language to prevent misunderstandings.
Using what I had learned so far, I compiled a list of vital data and tools and set up my own SME interview with the doctors on our project team to rank the order in which they wanted to see the data, as well as its importance to the transport process.
Finally, I was lucky enough to take a trip to Seattle Children's Hospital's transport department and watch the transport team demonstrate the transport process with the machines.
Having no prior medical experience, I found the largest hurdle was understanding what all the metrics meant, what all the machines did, and how they contributed to the transport process.
The MCP's role is to support the transport team when it cannot handle issues on its own. MCPs are usually caring for their own patients, so they only tune in when there is a problem.
Overloaded with numbers, metrics, and names, I decided we needed to start by sketching out as many variations of this app as we could. Because we already knew what data and features were required for this stage of the app, we did not need to worry about feature generation.
We reviewed the layouts and data we gathered by rating how well they matched with the insights we learned about the doctors, as well as our understanding of the transportation process. We decided on a branching information architecture, with the vitals screen as the center focus.
Moodboard snippet + inspiration
Early ideation sketches
Once we had a preliminary sketch, I scheduled a second stakeholder meeting to affirm our design decisions and make sure we understood the data we needed to display. We also presented our ideas for implementing requested features such as a live video stream and GPS tracking of the baby.
We were able to validate several assumptions about matching the design to real workflows, such as keeping all the vitals data prominently visible. We also got a lot of new information, such as a preference for screens that looked like hospital monitors (design value: familiarity) and explicit labels instead of ambiguous icons.
Using our insights from our research and subsequent design critique with our stakeholders, we generated the following values to help inform our design.
We spent several weeks iterating on the design of the various data pages, focusing mainly on the vitals ("main") page. Our values and insights told us the main page would be the most important for diagnosing the patient, so it was important to get this part right.
Evolution of the home screen
Though this app was meant to be part of a design workshop, I wanted to make sure that we were representing the intent behind this app as accurately as possible. I did not want the MCPs to get distracted by inaccurate data or miss key features that we wanted to showcase. Therefore, I set up a usability test to help us validate the design decisions we made and to check the accuracy of our vitals.
Though the doctors we tested with were not pediatricians, we gained valuable feedback about our prototype that helped us make the best use of our limited time with the MCPs during the design sessions. Here is what we found:
All of our participants expressed confusion with the placeholder waveforms we used to represent real data. Some were also confused by the inconsistent labelling (patient name vs. John Doe), which caused them to miss the video function.
Internally, we refer to the equipment used during transport by make and model (e.g., Zoll monitor, Sentec), but equipment varies between hospitals. This could cause doctors from different hospitals to respond differently to our solution, despite working in the same roles.
Our app is jam-packed with data, some of which we placed based on intuition and assumptions. Non-standard features like the note-taking function did cause participants to make several errors during the tests, but all were very excited about them during the debrief. They all said they were fine clicking around to learn the app, as long as it didn't take too long at first and helped them accomplish their jobs faster in the long run. So in short: no, it's not too complicated.
Because we wanted to understand how doctors might use this app to problem-solve on their own, we did not give participants explicit instructions to use the live feed. As a result, they all failed to find the live feed, but all were able to identify the problem because of their familiarity with the data. According to one participant:
"When I see that one lead is flat and the others are working, I know that the lead has fallen off."
After we told them about the feature, they said they were confused by the label "placeholder name" instead of "John Doe", as we had been calling the patient. This made them believe the button led them back to the main page to check on their other patients.
Leveraging color for communication
As we've learned, doctors are highly visual, and leveraging color to help distinguish conditions would be a major next step. We won't use color as the sole distinguishing factor, but we've heard from our SMEs that colorblind medical professionals are not negatively affected by the use of color, and it would be a valuable addition to the app.
More informative notifications
As diagnosis is the main task the MCP would use this app for, we want to make it even easier to start diagnosing from the get-go. We believe that by better capturing the doctor's attention and priming them to look out for certain vitals, we may be able to improve their ability to diagnose neonate issues during transport.
From this experience, I learned about the value designers provide as intermediaries and sense-makers. I didn't invent anything new during this project, but focused on gathering pieces of information scattered around the team and stitching them together into a product that addressed stakeholder needs. Making sense out of chaos, especially when the experts are too busy saving lives, is a pretty cool thing to do.
I also learned the importance of getting primary data and actually being immersed in the space you're trying to design for. I struggled for a long time in the beginning trying to make sense of all the vitals, metrics, and roles, and it wasn't until I was able to sit down with the transport team and ask questions as they demonstrated the devices that I finally had a clear picture of the system in my head.
And finally, but not least-ly, get stakeholder alignment early. That will help reduce ambiguities and allow you to deliver effectively, not anxiously (thanks Heather ❤️).