Benedikt Poser has been working on many flavours of functional MRI and is based at one of the ultra-high-field centers of the world. This is why David wanted some answers from a sequence expert who can call the higher fields his home.

David Brunner: Ultra-high-field systems have been around for quite some time. What would you consider the greatest benefit they offer today?

Benedikt Poser: You are right, these ultra-high-field systems have been around for quite some time. If you look at who has been the main driver of ultra-high-field, it is the labs that are mainly into neuroscience applications. That is quite easily explained: not only is the head ‘relatively easy’ to image at ultra-high-field, but the field strength also benefits anything to do with T2*-weighted functional imaging: the higher resolution, the higher specificity, and also the high contrast that comes with the changes in what makes up the BOLD signal.

DB: So, you are saying neuroscience applications are what drives ultra-high-field?

BP: Yes, historically this has been the case. The main driving force behind it, and what I think continues to push the field-strength boundaries, are indeed neuroscience applications. The first human 7T was installed at CMRR in 1999. Meanwhile, fantastic technological progress has been made on imaging well below the neck, and now, 18 years later, the first commercial 7T system is available with FDA and CE labeling – however, so far still only with support for head and extremity imaging.

DB: Is there a key factor missing in ultra-high-field today? Something you think would lift the opportunities ultra-high-field gives us today to a different level?

BP: The higher you move in field strength, the more it requires the different disciplines to work together to make everything function properly. One thing we really have to get under control is the transmit side, but you also need good gradients, good shims, and you have to have the right sequences, which of course come with the right reconstructions. This really is a joint effort of engineering, physics, and, increasingly, computer science.

DB: What tools are essential to make ultra-high-field not just sensitive and high-performing but also robust and comparable?

BP: Basic ingredients required to make it work at a given site are sequence programming, reconstruction, and parallel transmission. In terms of comparability, you need to know what the system does and be able to characterize it well. QA and field monitoring are going to be quite important. If you want to do reconstruction for a fancy sequence that acquires data on one magnet versus another, or one gradient set versus another, you will not be able to plug and play from one system to another without actually knowing the particular behavior or misbehavior of a given gradient set. Field monitoring will play an increasingly important role as we move away from standard, simple imaging approaches to sequences that push the limits of what the hardware can do.

DB: Another technique that seems to be taking over is 2D simultaneous multi-slice (SMS) readouts. You have worked on it and have also looked into 3D imaging. When comparing the two, what are the pros and cons? And for functional MRI, which technique will prevail?

BP: That remains to be seen. You may note that 2D multiband becomes increasingly like a 3D approach as you crank up the multiband factor. Fundamentally, you are doing 3D imaging when you pick up signal from the 3D distribution of the multiple slices. What remains in 2D SMS is the single-shot acquisition. This is no doubt its main advantage, as it ‘freezes out’ physiological noise. In 3D EPI, you collect the signals over many excitations before reconstructing, so, for better or worse, any physiological noise and motion effects get smeared out across the volume. In practice, this is often an advantage, though. Consider motion: if you always excite the entire volume, you are no longer sensitive to bulk motion because the magnetization remains in a steady state, whereas 2D would give you uncorrectable spin-history effects. Next to the inherent SNR advantage, this is one of the key benefits of 3D approaches over 2D.
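The inherent SNR advantage of 3D mentioned here is bounded by exactly the physiological noise it smears out. This can be illustrated with the widely used Krüger–Glover noise model, in which temporal SNR saturates at 1/λ no matter how much thermal SNR the acquisition gains (a sketch, not from the interview; the λ value is an arbitrary illustrative assumption):

```python
import math

# Kruger-Glover model: tSNR = SNR0 / sqrt(1 + (lambda * SNR0)^2),
# where SNR0 is the thermal (image) SNR and lambda scales the
# signal-dependent physiological noise. lambda = 0.01 is an assumption.
def tsnr(snr0, lam=0.01):
    return snr0 / math.sqrt(1.0 + (lam * snr0) ** 2)

# Doubling thermal SNR (e.g. via more 3D excitations) gives diminishing
# tSNR returns; tSNR asymptotes toward 1/lambda = 100 here.
for snr0 in (25, 50, 100, 200):
    print(snr0, round(tsnr(snr0), 1))
```

The take-away matches the interview: single-shot 2D buys freedom from physiological noise, while 3D's thermal-SNR gain pays off most at high resolution, where SNR0 is small and the model is still nearly linear.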

DB: What does it take to make 3D robust enough to compete and overtake?

BP: In 3D, we are not struggling with slice profiles anymore. Once you reach resolutions below 1 mm, no one can tell me that they are getting the 2D slice profiles that they want. The problem with 3D is anything dynamic that goes on during the acquisition of the image, such as motion or breathing. You should be able to accurately quantify the field changes associated with motion. Measuring and characterizing the field fluctuations and putting this information into a more advanced reconstruction would improve not only 3D EPI but also our core anatomical sequences: GRE, MPRAGE, SPACE, etc. I think dynamic field monitoring is the only way to really characterize what the field distribution in the brain is. With some effort, optical motion tracking might give precise motion information, but it does not let you deduce the associated field changes that one needs to know for a full correction.
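The simplest instance of "putting field information into the reconstruction" is demodulating the raw data with the monitored global field offset before the usual Fourier reconstruction. The toy sketch below shows only this zeroth-order case with a constant offset; function names and numbers are illustrative, and a real field-monitoring correction integrates the measured field trace over time and handles higher spatial orders as well:

```python
import numpy as np

def demodulate(samples, times, delta_b0_hz):
    """Remove the phase accrued from a global (zeroth-order) field offset.
    Assumes a constant offset in Hz; a real correction would integrate the
    monitored field trace and include higher spatial orders."""
    return samples * np.exp(-2j * np.pi * delta_b0_hz * times)

# A 10 Hz field drift corrupts a readout; demodulation restores it.
t = np.arange(64) * 5e-6             # 5 us dwell time (illustrative)
clean = np.ones(64, dtype=complex)   # stand-in for one k-space line
corrupted = clean * np.exp(2j * np.pi * 10.0 * t)
restored = demodulate(corrupted, t, 10.0)
print(np.allclose(restored, clean))  # True
```

In a long 3D acquisition such phase errors accumulate across many excitations, which is why the interview singles out field monitoring as the enabling ingredient for robust 3D.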

DB: Another subject is making data open and adopting open-science policies, which is increasingly required by funding agencies and has to be addressed by applicants. What statements can MRI researchers make in this respect?

BP: This is a question that goes way beyond just open data. I would say it depends on what you want to do with the data. If you want to download your NIfTI files from the Human Connectome Project and do some analysis on them, that is probably fine. But if you want to take other data and combine them with your own, you have to be very careful about how you merge these data sets. You will ultimately need to know not only how the data were acquired in terms of the protocol parameters but also what characterizes the system on which they were acquired and how this compares from one acquisition machine to another.
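A minimal first step toward the care in merging that is called for here is to compare the recorded protocol parameters before pooling data sets. The sketch below assumes BIDS-style JSON sidecar metadata (the key names follow the BIDS convention; the helper function and example values are hypothetical):

```python
# Hypothetical pre-merge check: flag acquisition parameters that differ
# between two scans. Key names follow the BIDS sidecar convention.
KEYS = ("RepetitionTime", "EchoTime", "MagneticFieldStrength", "FlipAngle")

def mismatches(sidecar_a, sidecar_b, keys=KEYS):
    """Return {key: (value_a, value_b)} for every parameter that differs."""
    return {k: (sidecar_a.get(k), sidecar_b.get(k))
            for k in keys if sidecar_a.get(k) != sidecar_b.get(k)}

site_a = {"RepetitionTime": 2.0, "EchoTime": 0.030, "MagneticFieldStrength": 7}
site_b = {"RepetitionTime": 2.0, "EchoTime": 0.025, "MagneticFieldStrength": 7}
print(mismatches(site_a, site_b))  # {'EchoTime': (0.03, 0.025)}
```

Matching protocol parameters is of course only the easy half; as the interview notes, the system-level behavior behind identical parameters still has to be characterized separately.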

DB: Do you think the MRI community has to do more to be well positioned with regard to this question? How well can we search and reuse data, and compare and validate new data against old, compared with other methods?

BP: There are a lot of multicenter studies going on that will probably run into these problems. It would be a good starting point if those studies published and described openly what the challenges were and what we as a community at large should be alerted to: what kinds of issues there are and the possible ways to deal with them. Studies that run on multiple systems have to have good quality-assurance protocols in place to make sure that each and every machine involved performs equally well. And if they do not perform equally well, you have to find a way of describing the differences properly so they can be taken into account in the subsequent analysis.

DB: On a closing note, do you think there are opportunities for the MR technology community to change its way of working to have an even larger impact on the outcomes for neuroscience and, eventually, the clinic?

BP: This will depend on your local culture in many ways. Taking our microcosm of Maastricht as an example: it is very important that we as developers – and I see myself as a developer in the neurosciences – open up and communicate more with the potential clinical researchers. Overall, I believe there is still quite a disconnect between cutting-edge ultra-high-field methods development and the people who mainly have a clinical background but might immediately benefit from these developments. I am sure this communication problem is common to other sites as well. A shared platform would be a very good starting point to bring the users together with the developers. The ISMRM itself is a good example: half of its members are developers and the other half are clinicians. But even there you see a certain divide, with still too little mingling going on between them. I hope we can make it more of one community of developers and users in the future. Applications are on the one hand driven by technology, but the technology should be equally driven by the applications. And the future applications a user may dream of need to be communicated to us, the developers. It is a communication that goes both ways.

Benedikt Poser, PhD
Assistant Professor MR Methods at Maastricht University
