Design Thinking

What is Design Thinking?

Design Thinking is an iterative process in which we seek to understand the user, challenge assumptions, and redefine problems to identify alternative strategies and solutions that might not be instantly apparent with our initial level of understanding. At the same time, Design Thinking provides a solution-based approach to solving problems. It is a way of thinking and working as well as a collection of hands-on methods.

Design Thinking revolves around a deep interest in developing an understanding of the people for whom we’re designing the products or services. It helps us observe and develop empathy with the target user. Design Thinking helps us in the process of questioning: questioning the problem, questioning the assumptions, and questioning the implications. Design Thinking is extremely useful in tackling problems that are ill-defined or unknown, by re-framing the problem in human-centric ways, creating many ideas in brainstorming sessions, and taking a hands-on approach to experimentation: sketching, prototyping, testing, and trying out concepts and ideas.

Design Thinking Phases

There are five phases in the Design Thinking methodology: Empathize, Define, Ideate, Prototype, and Test.

Step 1: Empathize

Step 1 focuses on empathizing with your users’ current situation: developing a sense of empathy toward the people you are designing for.

There are three steps within the Empathize phase: observing, engaging, and immersing.

Observe:

It is your job to gain insights into what your users need, what they want, and how they behave. Capturing behaviors, feelings, and thoughts when users are interacting with products or services in a real-world setting will help you better empathize with them.

Engage:

One way to engage with the people you are working with is to conduct interviews with empathy.

Immerse:

Find ways to “get into users’ shoes”. Bodystorming, the act of physically experiencing a situation in order to immerse oneself fully in the users’ environment, is a great way to do this.

There are several tools that you can use to help you and your team better empathize with your users. They include assuming a beginner’s mindset, asking “why-how-why” and the “5 whys”, conducting interviews with empathy, building empathy with analogies, using photo and video user-based studies, using personal photo and video journals, engaging with extreme users, utilizing story share-and-capture, bodystorming, and creating a journey map. We will focus on assuming a beginner’s mindset and conducting empathetic interviews.

Assuming a beginner’s mindset requires you to forget your assumptions and personal beliefs. Misconceptions and stereotypes limit the amount of real empathy you can build. A beginner’s mindset allows you to put aside biases and approach the problem with fresh eyes. You and your team can execute this by avoiding judgement, questioning everything, looking for patterns, and listening without thinking about how you are going to respond. Pauses in conversation are okay!

Conducting empathetic interviews is a great way to identify with your users. You and your team can execute this by asking why, encouraging stories and personal experiences from interviewees, embracing silence, and asking neutral questions without suggesting answers.

Step 2: Define

Step 2 involves synthesizing your observations about the users from the Empathize stage into a meaningful and actionable problem statement, which the design thinker will focus on solving. This definition will then kick-start the ideation process (Stage 3). A strong point-of-view statement preserves the emotion and the individual you are designing for, uses clear and strong language, includes insight, and generates several possibilities.

Tools that can help you in the Define stage include point of view (POV), “how might we”, the “why-how ladder”, and the “powers of ten”. You articulate a POV by combining three elements – user, need, and insight. Insert your information about the user, their needs, and your insights into the following sentence:

  • [User… (descriptive)] needs [need… (verb)] because [insight… (compelling)].
  • Ex. “Pet owners need to find friends for their pets because pets need to socialize and stay happy and active”.

The “how might we” approach includes asking self-reflective questions.

  • Ex. “How might we help pet owners find friends for their pets so that they socialize and stay happy and active?”

One way that you and your team can execute this step is to ask short questions that launch brainstorms based on a problem statement. This will seed step 3: the Ideation stage.

Step 3: Ideate

The Ideation stage is when you begin generating radical design alternatives. The goal of this stage is to explore a wide solution space that includes both a large quantity and a broad diversity of ideas. From this pool of ideas, you can build prototypes to test with users.

Tools that can help you and your team through the Ideation stage include brainstorming, brain dumping, brain writing, brain walking, challenging assumptions, SCAMPER tool, mind maps, sketching or sketch storms, storyboards, co-creation workshops, prototypes, and creative pauses. The one tool we will be focusing on is the mind map.

The mind map is a process through which participants build a web of relationships. To execute this with your team, have all participants create their own problem statements, write their own solutions to those statements, and then draw links between related problem statements and solutions across the group.

Step 4: Prototype

Designers can produce simple, scaled-down versions of their products or services, which can then be used to observe, record, judge, and measure performance levels based on specific elements, or general behavior, interactions, and reactions to the overall design. A prototype can be anything that takes a physical form – a wall of post-its, a role-playing activity, or an object. Prototypes are most successful when users can interact with them and provide input. There are two approaches to prototyping – low-fidelity prototyping and high-fidelity prototyping. Low-fidelity prototyping includes techniques such as storyboarding and sketching, while high-fidelity prototyping looks and operates closer to the finished product (ex. 3D models, trial implementations of processes).

Step 5: Test

The final step of the Design Thinking process is to test. This is a chance to gather feedback, refine solutions, and continue to learn about your users. This stage also gives you the chance to return to the Ideation stage and apply lessons learned.

“Prototype as if you know it’s right, listen as if you are wrong”.   – Diego Rodriguez Telechea

How can you execute this step? Let your users compare the alternatives and share their input and perspective. Show, do not tell.

Conclusion

Design Thinking implementation can result in improved and transformed approaches to strategizing solutions to user problems. It gives teams the opportunity to advance strategic skills and new solutions that will benefit both the team and their clients. For more information, contact ROCIMG at info@rocimg.com or (240) 912-1699.

 

ROCIMG

Matthew Wells

August 9, 2021

 

Sources

What is Design Thinking? | Interaction Design Foundation (IxDF) (interaction-design.org)

5 Stages in the Design Thinking Process | Interaction Design Foundation (IxDF) (interaction-design.org)

The Principles of Service Design Thinking – Building Better Services | Interaction Design Foundation (IxDF) (interaction-design.org)

Applying Design Thinking to Public Service Delivery.pdf (businessofgovernment.org)

 

Can a Vendor Management Initiative Influence Organizational Performance?

Can a vendor management initiative influence organizational performance? The concise answer is yes. However, this influence doesn’t occur overnight. Your vendor management initiative must progress through four levels of influence, from tactical to strategic, to achieve organizational impact. Exploring the Four Levels of Vendor Management Performance below will guide your initiative as it matures in organizational influence.

The Four Levels of Vendor Management Performance are:

  • Basic Vendor Management
  • Vendor Performance
  • Vendor Management Performance
  • Organizational Performance

Basic Vendor Management

Starting with the Basic Vendor Management level, the vendor management team must identify opportunities and execute the tactical fundamentals of vendor management. These typically include contract negotiations, contract management, spend analytics, and routine sourcing activities.

Vendor Performance

Monitoring vendor performance is the next level in our progression of influence. The vendor management team needs to monitor vendor service-level agreements and objectives (SLAs/SLOs), hold periodic business reviews, and use market intelligence and benchmarks to influence vendors’ performance.
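
To make this concrete, here is a minimal sketch of what automated SLA/SLO checking might look like ahead of a business review. The metric names, targets, and monthly figures are hypothetical assumptions for illustration, not a standard.

```python
# Minimal sketch of vendor SLA/SLO monitoring (illustrative only).
# Metric names, targets, and monthly figures are hypothetical assumptions.

SLA_TARGETS = {
    "uptime_pct": 99.9,           # floor: actual must be at or above
    "avg_resolution_hrs": 24.0,   # ceiling: actual must be at or below
}

MONTHLY_ACTUALS = {
    "uptime_pct": 99.95,
    "avg_resolution_hrs": 31.5,
}

# Which direction counts as "meeting" each metric.
HIGHER_IS_BETTER = {"uptime_pct": True, "avg_resolution_hrs": False}

def check_sla(targets, actuals):
    """Flag each metric as met or breached for the periodic business review."""
    results = {}
    for metric, target in targets.items():
        actual = actuals[metric]
        met = actual >= target if HIGHER_IS_BETTER[metric] else actual <= target
        results[metric] = ("met" if met else "BREACHED", actual, target)
    return results

for metric, (status, actual, target) in check_sla(SLA_TARGETS, MONTHLY_ACTUALS).items():
    print(f"{metric}: {status} (actual {actual}, target {target})")
```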

Vendor Management Performance

Upon establishing competencies at the Vendor Performance level, the vendor management team needs to evaluate itself by examining how well the team is performing. Measuring the team’s performance in areas such as value creation, the quantity of negotiated contracts, spend addressed, and vendor management return on investment will provide insight into opportunities for improvement and accomplishments to highlight.
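
The return-on-investment measure reduces to simple arithmetic. A minimal sketch of one common way to frame it, with all figures hypothetical:

```python
# Illustrative vendor management ROI calculation (all figures hypothetical).
# ROI = (value delivered by the team - team operating cost) / team operating cost

negotiated_savings = 1_200_000  # savings secured through negotiated contracts
cost_avoidance = 300_000        # value from avoided or deflected spend
vmo_operating_cost = 500_000    # salaries, tooling, and training for the team

value_delivered = negotiated_savings + cost_avoidance
roi = (value_delivered - vmo_operating_cost) / vmo_operating_cost

print(f"Vendor management ROI: {roi:.0%}")  # -> Vendor management ROI: 200%
```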

Organizational Performance

The ultimate aspirational level for the vendor management team is influencing their organization’s operational and financial performance. Leveraging the vendor management team’s knowledge of vendors and of fluctuations in the marketplace allows the team to influence the organization’s IT strategy and architecture and even the organization’s products and services. The vendor management team’s success in reducing costs, improving efficiencies, and creating value will contribute to the organization’s financial performance.

Our Take

Many of an organization’s decisions rely on information provided by IT. Thus, a highly competent, experienced vendor management team should aspire to influence the operational and financial performance of its organization by acting as a business solution architect and a trusted advisor for IT strategy that drives business value. Understanding the Four Levels of Vendor Management Performance will put your vendor management initiative in an optimal position to exert that influence.

 

Research by: Steven Jeffery

Info-Tech Research Group

February 3, 2020

Digital Experience Platforms: How Compelling Is Your Online Presence?

An organization’s website is its front door to the world. Consumers and constituents judge organizations on their ability to provide a modern web experience that is clean and intuitive and allows them to find information and services quickly. In commercial business, an outdated website is off-putting to prospects and can decrease conversion rates. In the public sector, a poor web experience can lead to frustration among constituents and, subsequently, their elected officials.

Today’s consumers expect an increasing breadth of capabilities on a modern website, from customer portals for self-service to chatbots that steer them to the right resources quickly and effectively. Providing a consistent and compelling web experience is a strategic priority for marketers in every organization.

A keystone application for powering next-generation web experiences is the digital experience platform (or DXP). A modern DXP allows organizations to build and deploy content to multiple endpoints (traditional websites, responsive design, or even dedicated mobile applications). These platforms also provide robust capabilities for dynamic content optimization, multivariate testing, and web analytics.

The lines between DXP and other application categories are blurring. While many DXP vendors began life as web content management solutions, they’re now expanding their scope into adjacent areas for sales, marketing, and service enablement. For example, DXP vendors like Sitecore and Episerver now have proprietary e-commerce solutions. In contrast, other DXPs have broadened their functionality to incorporate social media listening or marketing automation as part of their offering.

Source: SoftwareReviews DXP Data Quadrant, March 2021

For years, DXP has been a fluid (and somewhat ill-defined) category. In framing the DXP category on SoftwareReviews, we set the boundaries around solutions that emphasized the ability to support next-generation web journeys. Many DXP providers are now embracing the notion of “headless content management” – leveraging the platform as a channel-agnostic mechanism to deliver content for the web alongside custom portals or mobile applications.
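
To make “headless” concrete, the sketch below shows the basic pattern: the platform exposes content as structured data over an API, and each channel renders that content independently. The endpoint and field names are hypothetical assumptions, not any particular vendor’s API.

```python
# Generic sketch of headless content delivery (endpoint and fields are hypothetical).
import json
from urllib.request import urlopen

# A headless DXP/CMS serves channel-agnostic content as structured data.
CONTENT_API = "https://cms.example.com/api/content/homepage-hero"  # hypothetical

with urlopen(CONTENT_API) as resp:
    content = json.load(resp)  # e.g. {"title": "...", "body": "...", "image_url": "..."}

# Each delivery channel renders the same content its own way.
def render_web(c: dict) -> str:
    return f"<h1>{c['title']}</h1><p>{c['body']}</p>"

def render_mobile(c: dict) -> dict:
    return {"screen": "hero", "heading": c["title"], "text": c["body"]}

print(render_web(content))
print(render_mobile(content))
```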

The importance of integration with other repositories of customer information (particularly CRM platforms) has also risen sharply in the last two years. After all, delivering personalized web experiences relies on what we know about the customer and their behavioral signals and preferences.

As organizations look to modernize their web presence, particularly in response to the COVID-19 pandemic and its impact on digital channel adoption, they will need to draft strong functional requirements to select the best-fit vendor. Using our comprehensive set of resources and vendor evaluations, you can navigate the complex DXP solutions landscape and select a solution that will meet all of your needs.

 

Research by: Ben Dickie

Info-Tech Research Group

April 6, 2021

Enterprise Architecture Trends 2021

1. Architecture guilds are going to become ubiquitous.

This is the idea that different domain architects will collaborate within their architecture guilds, a more formal way of sharing ideas and artifacts and approving architecture decisions. It will start with data architecture guilds, infrastructure architecture guilds, and application architecture guilds, but it will quickly expand to security, business, and integration. This can only happen if the domain architects are given the authority to make good architecture decisions in the moment on behalf of enterprise architecture.

2. Architectural dexterity is going to drive the architecture conversation.

From business architecture down through data, application, and technology architecture, each layer differs in its ability to respond quickly to business change. Clearly, the business architecture should be designed to be the most nimble, the data layer almost as nimble, and so on down through the architecture stack. This will dispel architecture’s “red tape” reputation and inspire confidence in projects and programs with architecture support.

3. Architecture review boards are going lean.

When an organization starts to instill more trust in its domain architects to make the right decisions and to impart architecture knowledge at the team level, there will be less need for an enterprise architecture review board. Organizations will work toward a point where standards, patterns, and architecture decisions are left to the teams and reported up the chain. This aligns closely with the Agile mentality, which is also becoming the standard development and delivery model.

4. Business architecture has become mainstream.

Business architecture used to be the forgotten child of enterprise architecture. There are very good business architects out there, but they are elusive. Business architecture is an art, and a very different discipline from business analysis. It involves articulating the business capabilities, business roles, drivers, collaborations, and business processes. Modelling these concepts gives us a way to trace our architecture components to ensure we deliver value that is focused on what the business needs, exactly when it needs it.

5. EA tooling and languages come into their own.

EA tools have been around for years, from Rational Architect to Erwin to Sparx and everything in between. Just recently we have seen an influx of organizations subscribing to EA tools such as LeanIX, iServer, ServiceNow, and iDoc. These tools are more visual and do not require the deep modeling expertise needed for the traditional EA tools, but they still provide good insight into where architecture is helping the business. These platforms are going to be the catalyst for enterprise architecture models to facilitate all business change in modernization and transformation programs alike.

6. Architecture follows the Agile lead.

Gone are the days when enterprise architects would climb their ivory towers and lock themselves away for a few months while they pondered the right target-state architecture. The modern approach to architecture calls for just-in-time synchronization with just the right stakeholders. Building on the idea of lean architecture review boards, this may take the form of spontaneous collaboration in the moment rather than a heavyweight governance process.

7. Enterprise architecture teams and innovation teams become one.

Innovation will drive goal-aligned business success with new and exciting ideas. However, most innovation projects will fail fast and, ideally, at minimal cost. Those that do make it need quick support to permeate the business processes, application portfolios, and data landscapes. This is where a lean enterprise architecture practice can facilitate bringing these innovations to the front lines.

Bottom Line

As we emerge from the pandemic, digital transformation, mergers and acquisitions, and modernization programs are going to be everywhere. To guide these programs to success, a stealth approach to enterprise architecture is going to be vital.

 

Research by: Andrew Neill

Info-Tech Research Group

February 12, 2021

Software for Virtual AGMs and Shareholder Meetings

Remote annual general meetings (AGMs) and shareholder meetings have certain end-user requirements that cannot be fully met by standard go-to web conferencing tools (such as Microsoft Teams, Zoom, and Cisco Webex). Some may find that these tools can meet most of their requirements for such meetings – webcasting, live events, registration, analytics, and so on. However, for this use case, there is an overlooked redline requirement: real-time weighted voting.

Real-time weighted voting falls outside standard web conferencing tools’ capabilities. Whereas these tools have polling features that follow “one person, one vote,” that polling cannot be configured to handle more complex resolutions that reflect shareholders’ or business leaders’ distributed voting power.
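
To illustrate the difference, here is a minimal sketch that tallies a resolution in which each ballot is weighted by the shares the voter holds. The names and share counts are invented for the example.

```python
# Minimal sketch of weighted vote tallying (names and share counts invented).
# Unlike "one person, one vote" polling, each ballot counts in proportion
# to the voter's holdings.

shareholdings = {"Alice": 5_000, "Bob": 1_200, "Carol": 300}
ballots = {"Alice": "for", "Bob": "against", "Carol": "for"}

tally = {"for": 0, "against": 0}
for voter, choice in ballots.items():
    tally[choice] += shareholdings[voter]

total = sum(tally.values())
for choice, weight in tally.items():
    print(f"{choice}: {weight:,} shares ({weight / total:.1%})")

# A simple headcount would report 2-1 in favor; the weighted result is
# 5,300 shares for vs. 1,200 against (81.5% in favor), which is what
# the resolution actually requires.
```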

Specialized software exists to address this gap in the web conferencing tool marketspace. These real-time audience engagement technologies are used across the meetings and events industry, with both in-the-room and cloud-based solutions. For those interested, consider:

  • Lumi Global: A market leader with 25 years of experience providing end-to-end event management services. Suitable for AGMs, conferences, and other events.
  • EventMobi: A customizable events app that includes features such as gamification and networking.
  • Broadridge’s ProxyVote: A digital platform primarily aimed at financial services and securityholder participation.

Of course, organizations that already have web conferencing tool licenses should be careful not to unnecessarily bloat their collaboration toolset. Workarounds for weighted voting might include:

  • Collecting the votes before the meeting and reporting the results live.
  • Collecting the votes live and feeding them into a spreadsheet that calculates the weight of each vote. (The potential for technological or logistical problems is highest here.)
  • Collecting the votes live but calculating the results offline.

To manage expectations, ensure there is full communication and transparency about the process.

Source: SoftwareReviews Web Conferencing Data Quadrant. Published April 20, 2020

 

Research by: Thomas Randall

Info-Tech Research Group

June 3, 2020

UCaaS in 2021: The Top Three Trends

2020 has been a year of digital disruption, and the unified communications as a service (UCaaS) marketspace is no exception. COVID-19 forced organizations to rapidly modernize their communication and collaboration infrastructure to enable remote and hybrid work.

UCaaS has never been more important to the business than it is now. Indeed, with 88% of office workers now stating their preference for some form of hybrid remote work, UCaaS has become absolutely critical in providing connectivity, flexibility, and the means for collaboration.

UCaaS Trends for 2021

Here are the top three UCaaS trends for 2021 to watch for:

  1. Hybrid work will demand a rethink of how videoconferencing happens in the workplace. Before the pandemic, perhaps only one or two employees in the average office typically had to dial in remotely for a meeting. The general experience, though, was one of technological frustration: setting up the connection was time-consuming, and when remote employees could join, their presence was often forgotten. As hybrid work becomes the default for 2021, we can expect UCaaS vendors to continue investing in videoconferencing solutions that simplify the dial-in process and create a level playing field for all attendees. Various market leaders in the UCaaS space – namely Zoom Rooms, Microsoft Teams Rooms, and Webex Rooms, among others – are already pushing their own solutions into the market, and we can expect other contenders to follow suit.
  2. The demand for data analysis tools will push more investment in AI technology. Data is the new oil: it tells employers about employee productivity, communication engagement, and technology quality of service (QoS). Remote work has only increased the need for visibility into how employers can enhance employee workflows. UCaaS vendors have a key role to play here, with AI-driven capabilities becoming part of the standard offering for market leaders. Detailed insights beyond QoS include sentiment analysis, recommendations for improvement, and compliance monitoring – all of which are incredibly valuable to the enterprise.
  3. Open interface standards will become more commonplace. Out-of-the-box UCaaS solutions definitely have their place, but organizations often want to customize their solution and better integrate it with their current IT architecture. For this to occur, UCaaS offerings need to be based on open standards that expose an API. With the rise of communication APIs across the market, the lines between UCaaS, communications platform as a service (CPaaS), and contact center as a service (CCaaS) solutions are blurring significantly. In response, we can expect UCaaS vendors in 2021 to further embrace open standards as they strive to remain competitive, allowing their solutions to slot right into any organization. A byproduct is that the UCaaS market will continue maturing as table stakes offerings are largely standardized. Despite the recent influx of high-flying contenders to this market, such as Microsoft Teams and Zoom Phone, customer expectations for baseline functionality are increasingly uniform. Any new player in this space will thus have a benchmark to meet in order to be taken seriously. (A generic sketch of this kind of API-level integration follows this list.)
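
As a deliberately generic illustration of what API-level openness enables, the sketch below posts a message into a team channel through a hypothetical UCaaS REST endpoint. The base URL, token handling, and payload schema are assumptions for the example, not any real vendor’s API.

```python
# Generic sketch of integrating with a UCaaS platform over a REST API.
# The endpoint, token, and payload schema are hypothetical, not a real vendor's API.
import json
from urllib.request import Request, urlopen

API_BASE = "https://ucaas.example.com/v1"  # hypothetical base URL
API_TOKEN = "replace-with-a-platform-issued-token"

def send_channel_message(channel_id: str, text: str) -> dict:
    """Post a message into a team channel from an external system (e.g. a CRM alert)."""
    req = Request(
        f"{API_BASE}/channels/{channel_id}/messages",
        data=json.dumps({"text": text}).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urlopen(req) as resp:
        return json.load(resp)

# Example: surface a ticket escalation inside the collaboration tool.
send_channel_message("ops-alerts", "Ticket #4821 breached its SLA and was escalated.")
```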

Source: SoftwareReviews UCaaS Data Quadrant. Accessed December 18, 2020.

If you’re looking to speed up your UCaaS selection process, download Info-Tech’s Rapid Application Selection Framework.

 

Research by: Thomas Randall

Info-Tech Research Group

December 17, 2020

Are Cyberattacks Like Natural Disasters?

Fire, Hurricane, Earthquake, Cyberattack?

Not exactly. Cyberattacks are terrible and demand the same dedication to overcome as any other disaster response effort. Just like natural disasters, cyberattacks cause millions of dollars in damage, disrupt infrastructure, and impede citizens’ daily lives. US cities like Baltimore, Allentown, and San Antonio have highlighted how cyberattacks are shifting how we think of disasters. Since May, Baltimore has been dealing with the cyberattack that shut down many of its services, and estimates put the current damage from the ransomware attack at over $18 million. The city’s mayor and city council president are now calling for the ransomware attack to be classified as a federal emergency, which would mark the first categorization of a cyberattack as a disaster requiring federal emergency assistance.

But should the cyberattack levelled against Baltimore be called a disaster? Baltimore believes that its situation merits the designation of “disaster” because the attacker or attackers used the EternalBlue exploit, a cyberweapon developed by the NSA, to enable the RobbinHood ransomware attack carried out against the city (SmartCitiesWorld, “Baltimore Calls for Federal Emergency Declaration”). However, many cybersecurity experts have disputed the claim that the EternalBlue exploit was even part of the malware attack, as reported by cybersecurity journalist Brian Krebs (Krebs on Security, “Report: No ‘Eternal Blue’ Exploit Found in Baltimore City Ransomware”). Even if the exploit was part of the ransomware campaign, Microsoft released the fix for that flaw in its operating system two years ago, making it appear as if Baltimore is trying to shift blame and avoid questions over why its systems weren’t patched immediately upon learning of the vulnerability.

Furthermore, what constitutes a disaster is rather difficult to determine. If we’re talking about the sheer cost of damages, according to Yale’s School of Forestry and Environmental Studies, natural disasters caused $160 billion in damage in 2018 (Yale Environment 360, “Natural Disasters Caused $160 Billion Dollars”). Compare that to ForgeRock’s recent estimate of the cost of data breaches for 2018, which calculated that the exposure of 2.8 billion consumer data records reached an estimated cost of $654 billion (ForgeRock, “U.S. Consumer Data Breach Report 2019”). ForgeRock bases its estimate on the Ponemon Institute’s method for calculating the cost of data breaches in 2018, which takes into account the direct, indirect, and opportunity costs associated with detection and escalation, notification, post-breach response, and lost business.

Other man-made disasters, like the 2017 California wildfire caused by Pacific Gas & Electric, might look similar to Baltimore’s case because of neglected maintenance of critical infrastructure. In both cases, a disaster was caused by failures in the organization’s infrastructure, which resulted in severe costs to citizens, organizations, and municipalities. Furthermore, just like natural disasters, cyberattacks are reaching a new level of complexity that challenges traditional response efforts to contain and mitigate their effects.

Assistance Outlook Unclear

Although Baltimore’s case for disaster assistance remains unclear, its situation is far from uncommon. After the 2017 NotPetya attacks that hit Ukraine and then spread around the world, Mondelez International was hit with the ransomware and suffered damages upwards of $100 million. Mondelez filed an insurance claim for damages with Zurich American Insurance because its all-risk property insurance policy covered both direct physical losses and indirect expenses from computer failures. Zurich rejected the claim, however, citing an exclusion clause for “hostile or warlike action” that protects insurers from costs related to damage incurred in war (New York Times, “Big Companies Thought Insurance Covered a Cyberattack”).

Because the US government claimed that NotPetya originated from Russian attacks against Ukraine, insurance companies used this designation as an opportunity to wash their hands of one of the most significant cyberattacks in history. Mondelez, like other companies, has filed complaints against insurance companies, and many of these cases will not be decided for years. But without any clear definitions, companies and municipal governments are effectively collateral damage in cyberwarfare, leaving them at the mercy of more complex and unpredictable attacks.

A Tale of Two, Three, or Even More Cities

Where do we go from here? Many organizations have a mix of current and legacy technologies in their system. An undated risk assessment report for Baltimore’s IT systems, for instance, warned that the city was using computer systems that “were a natural target for hackers and a path for more attacks in the system” (Baltimore Sun, “Baltimore’s Risk Assessment called a pair of aged city computer systems a ‘natural target for hackers'”). Failing to plan for how to deal with known vulnerabilities is planning to fail when those vulnerabilities lead to incidents.

If it’s a matter of finding the resources, people, and technology to further mature its security strategy, Baltimore could learn something from three UK councils that joined together under one Security Operations Center to improve efficiency, compliance, and security efforts (CSO Online, “Shared SIEM helps 3 UK local governments avoid outsourcing security”). Rather than outsourcing, which can be expensive and still not address underlying governance and process issues, combining resources allows smaller organizations to build what some have called Global Security Operations Centers (GSOCs). Universities, for instance, have also taken this step, showing that there are use cases for this tactic beyond three small councils in the United Kingdom.

As the above shows, there are serious advantages to building up your own security operations, especially while governments and insurance companies are still trying to figure out what to do for cities like Baltimore or companies like Mondelez.

Recommendations

  • If you’re building a structure on a fault line, you’d build something that mitigates the effects of an earthquake. Take a security by design approach to whatever you build. If you aren’t prepared, don’t blame the disaster. You’re ultimately accountable.
  • Invest in disaster recovery planning; it ensures service continuity in the face of severe disruption.
  • Know your vulnerabilities and act on them. Do not “run to failure” to save money, especially when remediation will ultimately cost less than future incidents.
  • Keep your threat intelligence up to date and patch vulnerabilities as soon as possible. Best practice is to test critical vendor patches within a week of release and deploy them within 30 days.

Bottom Line

French philosopher Maurice Blanchot wrote that “disaster ruins everything, all the while leaving everything intact.” What Blanchot means is that risk is inherent to the way we live and the way we operate our organizations. We need to stop thinking about disasters as hypotheticals, because risk is at the center of every decision, action, and endeavor we undertake. Security operations teams treat risk as an everyday reality, embracing it as the guiding principle of security and never ignoring the risks that could lead to disaster. Take action, because your organization is ultimately accountable when disaster strikes.

 

Research by: Marc Mazur

Info-Tech Research Group

July 5, 2019