A full course introducing MATLAB for Health Data Science

I have created a full course introducing MATLAB for Health Data Science. The complete course is on YouTube, together with full documentation on GitHub. Let me tell you more…

Python and R are leading programming languages in Data Science. Both come with excellent tools for analyzing data and sharing our data-driven stories. These and similar languages are open source and free to use. They have democratized our ability to find solutions using data.

Open-source languages are not the only tools at our disposal, though. Commercial software and languages such as MATLAB from MathWorks are just as viable for Data Science. The fact that they are not free should not deter us from using them. In fact, MATLAB is still ubiquitous in many industries and institutions.

MATLAB in particular is a fantastic tool for working with and analyzing data. The language and coding platform are mature, and it shows. Add-on apps and built-in functionality allow even those without coding experience to work with data. Using only buttons and dropdown menus, we can import data, analyze it, create plots and figures, run statistical tests, and build models.

Speaking more than one human language is such a powerful skill. The same goes for computer languages. I personally use Python, R, Julia, the Wolfram Language, and MATLAB. Each of these brings something special to the table.

I produce many similar open educational resources for my students. I never know where their careers might take them. All I can do is give them as many resources as I can, including resources that do not make it into a very full curriculum. They know that I support them long after they have left the classroom. Even if you were never a postgraduate student at the George Washington University, I hope that you can find these resources useful too. Maybe it will even inspire you to join us.

The full video course is available on my second channel, and you can find it by clicking HERE.

The full documentation is available on GitHub, and you can find it by clicking HERE.

Improving prompts for better replies from large language models

This post is all about writing better prompts. A prompt is the input text that we write when chatting or otherwise communicating with a generative artificial intelligence model. In many cases, our default position is to be very terse when writing prompts. We have become accustomed to typing very short expressions into a search engine. That habit does not serve us well with generative models. They are designed to mimic communication and interaction with a human being. When we want detailed and precise responses from a real person, we are generally more exact in our own words. This is very much the approach we need to take when conversing with a generative model.

We would all like generative artificial intelligence models such as ChatGPT to provide us with the perfect response. To do this, we really need to focus on our prompts. That focus helps the model return exactly what we need in a response, or sets up the chat for a productive conversation.

I am your future ruler, uhm, I mean your friendly generative artificial intelligence model. What can I help you with?

Thinking about and writing proper prompts is now a recognized skill. Prompt engineering is the term that has developed to describe how to communicate effectively with generative models so that they understand our requirements and generate the best possible responses. A quick look at the job market now turns up advertisements for prompt engineers. Some of these positions pay quite well.

Courses have been developed to teach prompt engineering, and there are many quick tutorials on platforms such as YouTube. This post adds to the long list of help in writing better prompts. The field of generative artificial intelligence is changing at a rapid rate, and I will no doubt return to this topic in the future. In this post, I take a quick look at some basic principles to keep in mind when writing a prompt. These principles mainly apply when we first initiate a new chat with a generative model. Subsequent prompts in a chat can indeed be more terse.

In short, a proper prompt should include the components in the list below. Take note that there is some order of importance (from most to least important) to this list and there is, at least to some extent, an overlap between the components.

  • The task that the generative artificial intelligence model should perform
  • The context in which the conversation is taking place
  • One or more examples pertaining to the prompt
  • The persona that the model should take on when responding
  • The format in which the model should respond
  • The tone of voice that the model should write in

We can think of constructing a prompt by writing content for the following placeholders.

[task] + [context] + [exemplar] + [persona] + [format] + [tone]

I would love it if you write your prompt like this. Sincerely, your generative AI model.

It is important to note that not every prompt needs all the information above. Typically, only the prompt that initiates a chat should include as much information as possible. Below, we take a closer look at each component of a proper prompt.
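Before looking at each component, the placeholder template can be expressed as a small helper that joins whichever components we supply, in the template's order. This is only a sketch of the structure; the function name and signature are my own illustration, not part of any tool.

```python
# A minimal sketch of the [task] + [context] + [exemplar] + [persona]
# + [format] + [tone] template. Empty components are simply omitted,
# since not every prompt needs all six.
def build_prompt(task, context="", exemplar="", persona="", format_="", tone=""):
    """Join the non-empty prompt components in the template's order."""
    components = [task, context, exemplar, persona, format_, tone]
    return " ".join(part for part in components if part)

# Only the task is required; later prompts in a chat can be terser.
opening = build_prompt(
    task="Write the definition of the measures of central tendency.",
    context="I am a postgraduate student in public health.",
)
```

Note that the order of the joined pieces follows the template's order of importance, with the task first.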

It is usually a good idea to start the task with a verb. Examples include words such as Write …, Create …, Generate …, Complete … , Analyze …, and so on. We should be as precise and direct as possible when writing the task. This is, after all, what we want the model to do. The task might refer to a single action, or indeed, many actions. An example might be the following sentence: Write the definition of the measures of central tendency and provide examples of such measures. This task starts with a verb and contains two instructions.

The context is not always easy to construct. In the case of our example, we might add: I am a postgraduate student in public health and I am learning about biostatistics. This context can guide the generative model to return a response that can serve as learning material and that should be easier to understand than a more technical response. Additional information such as I am new to biostatistics, This is the first time I am learning statistics, or I am reviewing this content for exam preparation can be added to the context. The sky is the limit here. That is not to say we should overload the model with context. Just enough to guide the model when generating the response usually works wonders.

The quality and accuracy of responses have been shown to increase decidedly when we include examples. Continuing with our reference to measures of central tendency, we might add the following: My textbook includes measures of central tendency such as the arithmetic and geometric mean, the median, and the mode.

The persona helps the model to generate text in a specific framework. We might write the following: You are a University Professor teaching postgraduate level biostatistics. Clearly, the inclusion of this information should guide the model when generating its response. The response might be quite different if we add the following persona: You are a middle school teacher or even You are Jedi Master Yoda.

Describing the format allows us to guide how the result should be generated. We might want a simple paragraph of text explaining the measures of central tendency, a bullet-point list of the measures and their definitions, or a table with columns for measure of central tendency, definition, and example. We have to envision how we want the final result to be formatted. Note that we can also include this information in the examples that we provide in the prompt. The format also ties in with the task: we might want the model to write an essay about the topic or create study notes.

The tone of voice is not always required. We might want to include a specific tone if we plan to use the content generated by the model as our own personal study notes, or as formal content for an assignment (given that we have permission to use a model to complete our work, or that we stipulate that we used one). Here we might also mention that we prefer the first- or third-person perspective, or even whether the response should be humorous or very formal.
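Putting all six components together, the running biostatistics example from this post might be assembled into a single opening prompt. A minimal sketch follows; the task, context, exemplar, and persona sentences are quoted from this post, while the format and tone sentences are my own illustrative additions.

```python
# Assembling the running example into one full opening prompt, following
# the [task] + [context] + [exemplar] + [persona] + [format] + [tone] order.
task = ("Write the definition of the measures of central tendency "
        "and provide examples of such measures.")
context = ("I am a postgraduate student in public health "
           "and I am learning about biostatistics.")
exemplar = ("My textbook includes measures of central tendency such as the "
            "arithmetic and geometric mean, the median, and the mode.")
persona = "You are a University Professor teaching postgraduate level biostatistics."
# The format and tone below are illustrative, not quoted from the post.
format_ = ("Respond with a table with columns for measure of central tendency, "
           "definition, and example.")
tone = "Write in a formal tone and in the third person."

prompt = " ".join([task, context, exemplar, persona, format_, tone])
print(prompt)
```

Pasting the resulting paragraph as the first message of a chat gives the model the task, the setting, and the expected output in one go; follow-up prompts can then be much shorter.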

In human communication we can draw on context, voice intonation, facial expressions, verbal interactions, and much more to obtain the information we require. In the case of a generative artificial intelligence model, we have to attempt the same thing, but with our words alone. We actually have a lot of practice with this, having moved so much of our interaction to email and chat applications. Now we are just chatting with a model.