It's now time for the very last posting, the one that's going to conclude this blog and the DM2572 course, for me. As a closure, we're going to look back to what we've learned: if we were to think about methods to answer a complex research question - what would we come up with?
"I know one thing : that I know nothing" - Socrates
Much like everything else in this world, as Kant made us realize in the preface to his Critique of Pure Reason, complexity is relative. Objectivity, the perception of the world from an unbiased and absolute point of view, cannot be attained by mankind, which leads us to accept and embrace our flaws while remaining critical of our findings.
This idea, introduced in the very first theme of the course, is the one that has stuck with me throughout the whole process of learning about theory and methods for research in DM2572, and the one I hope will never disappear - it is central to the methodology of research, and should never be forgotten.
Keeping that in mind, we'll have to take the time to define the research question properly. In research, much like in any other field where an answer is needed, the methodology has to be kickstarted by an effective, clear, and precise delimitation of the issue at hand. Indispensable to research design, problem definition is needed in scholarly research of course, but also in industrial research and development. One could think that R&D is, just as we've been taught through our many years of scientific education, merely a way to answer questions already defined by the needs of a user. This may have been true in the past - but today, we're seeing innovations that we didn't even know we needed before they existed! Look at Facebook, Instagram, Twitter… They're the result of problem definitions that go beyond simple user needs - they're answers to well-crafted design research, and they show that problem definition can be the incubator of great and revolutionary ideas. So we'll make sure to brainstorm and take the time to examine the issue from various angles.
Now that we've defined our problem - it's time to design our response, the methodology with which we're going to answer the issue. We've learned a lot about quantitative and qualitative methods - it's not really the point of this posting to define them again. To put it briefly, quantitative methods rely on mathematical and algorithmic processing of raw data to produce statistics, numeric data that we can use; while qualitative methods involve human interpretation of the meaning and intentions conveyed by raw data, in order to collect and sort the information in a more nuanced way. We've learned about numerous strategies belonging to both categories - all of which are effective and useful in their own way. So how do we decide which to pick, and which to discard? There's no "one is better than the other" - and even if there were, we'd be incapable of choosing, given that we are, let's say it again, completely biased and bound to a relative point of view.
Rather than the intuitive approach of choosing beforehand which of these we'd go for, we've learned how much more effective it is to think backwards - about the results we want, and the type of theory we want to craft. What do we want to analyze? What question are we answering? What do we want to shed light on? Given our hypotheses, we want to find the logical link between the different entities at hand, and we're searching for a methodology that allows us to do so. There are different types of data, each of which is better collected through either quantitative or qualitative methods. In that sense, the decision of which collection method to use happens in parallel with the design process - one we can thankfully reshape and remodel if we notice mistakes along the way, through testing, proof of concept, and so on. All the pieces of our handiwork come together at the same time, under carefully thought-through supervision.
As you'd expect - there is no secret recipe for producing a relevant and robust theory to answer a research question, much less if said research question can be considered complex, or a wicked problem. If the area you're exploring is new to you, something foreign and not completely understood, it might even be hard to formulate a hypothesis. Case studies can be seen as a strategy to start with - to familiarize yourself with your research topic and begin working effectively. Of course, that alone doesn't cut it, and it has to be followed by the process described above.
After tremendous effort and thought, we can arrive at a point where we've gone through the design, the crafting, and the research itself. Now, what comes next? We've learned about the importance of prototyping, not only for research in industry but also in a scholarly environment. Evaluation is a step we can't just skip, as it is the final validation of our model.
Of course, the above reflection is just a rough draft of what I've learned - nothing comes of it without practice, and I'm sure I'll add a lot more to this once I've eventually settled into my master's thesis. This is only the starting point, and I'm glad to have understood so many things that were, to me at least, not that easy to grasp at the beginning.
The lectures and literature brought me closer to practice, with structured and precise cases where I could see the research design and evaluate it from a critical point of view - all the while learning about its upsides and downsides. The seminars, on the other hand, were a way for me to clear up what I hadn't understood at first, to share thoughts, and to learn from other people's perspectives - it was of course very inspiring and enriching, and I'm thankful for having been part of such an enticing experience!