Week 5

The discussion this week was really productive. Some questions, notes, and comments are appended below. There were also a ton of really insightful comments that people sent in that we never got to talk about in class. Do check them out - they are brilliant (doc).

Task 1
We are light on readings for this week. Instead, let's try to come up with our equivalent of the Polya 'checklist' for tackling problems in climate science and the Earth sciences. As several people pointed out, our issues may be as much about choosing the right problem as about how to proceed in solving it. Feel free to interpret the task as loosely as you want.

Please do draw from your own research experiences and fields. And give it plenty of thought.

It will be interesting to see what the areas of overlap and differences are. One goal of the whole class was to explore whether we could identify ways of making our research more efficient in achieving an understanding of messy systems. This exercise is a pretty concrete step in that direction (not to mix metaphors).

Make sure to send everything to David (& me too) - I'll be in Delaware next class.

So no new reading for this week, but we will come back to the following:

Figg - what the heck is a model anyway? (pdf)
Polya - How to Solve It, excerpts (pdf)

Specific questions:
  1. What is a “good problem”? How do you define one (what are its elements)?
  2. Is there a set of principles that would make research/progress on a complex problem more efficient?
  3. Can we make a Polya list for a complex problem? Draw on your own experience and problems.
  4. Is it meaningful to break up a problem in a complex system into smaller problems and trust that, when you glue them back together (mathematically, physically, or mentally), you will have solved your problem (or gotten closer to the truth)?
Task 2
Please think about what would make for a good case study. We have 2 papers lined up about the atmospheric general circulation for the week after next. But it'd be great if we could think of two or three more problems that are examples of good (or bad) problems, which we can look at to cogitate on what makes them good (or bad). We might pick examples that have been answered, or also, as Justin M suggested, problems that have not yet been answered. Can we apply our checklist (see above!) to get some sense of their tractability? We need to avoid being too exclusive or specialized in these case studies, so it'd be good to come up with lots of possibilities we can pick from.



More random notes (from David):

Understanding & prediction

Culture: need to have a sense of self-criticism; full disclosure

CAN'T SEPARATE THE MODEL FROM THE PROBLEM.

Problem -> goal -> model/tool (could include many types of models, as defined by Levins). Levins states: “A satisfactory theory comes from a cluster of models”. But how sure can you be that when you glue the submodels together – mathematically or mentally – the model will actually be a good model/theory?

Model -> problem -> goal (this approach can lead to the “if I have a hammer, all problems are nails” syndrome), or short-circuit the thought process of defining what SET of tools is best for the problem.

[As you refine your tools, are you still working on the initial problem?]

GOALS can be described (by Levins) as:
1.    Generality – widespread applicability
2.    Precision – sacrifices realism for accuracy; quantitative predictions
3.    Realism – including all the details

For a given model, you are sacrificing something to gain something else.

If you started fresh (no models at your disposal), would you use the same models to solve your problem?

Examples of models/problems we can discuss:
Weather Forecast Model (good example of problem -> goal -> tool)
Testing a hypothesis that comes from collecting paleo data (or any other data). For example, D/O events -> CLIMBER -> hone the hypothesis -> is the model appropriate, based on what you know? -> next step taken?
Cloud Resolving Model –
Parameterizations: