In my previous post on this topic I left two loose ends: one in the main text about shading in 3D, and one in the comment section to follow-up on a couple of points in Evan’s feedback. I finally managed to go back and spend some time on those and that is what I am posting about today.
Part 1 – apply shading with transparency in 3D with the surf command
I was trying to write some code to apply the shading with transparency using the surf command. In fact, I had been trying, and asking around in the Matlab community, for more than a year, but to no avail. I do not think it is possible to create the shading directly that way, but I did find a workaround. The breakthrough came when I asked myself this question: can I find a way to capture in a variable the color and the shading associated with each pixel in one of the final 2D maps from the previous post? If I could do that, then it would be possible to assign the colors and shading in that variable using this syntax for the surf command:

surf(x,y,data,c)

where data is the gravity matrix and c is the color and shading matrix. To put this into practice I started from a suggestion by Walter Roberson on the Matlab community in his answer to my question on this topic.
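As a minimal sketch of that syntax, with the built-in peaks surface standing in for my gravity data (an assumption for illustration only), here is one way to pass an m-by-n-by-3 truecolor array as the color argument to surf:

```matlab
% Sketch only: peaks stands in for the gravity matrix (not my actual data)
data = peaks(50);
% Build an m-by-n-by-3 truecolor array: grayscale shading derived
% from a normalized copy of the data, just to illustrate the idea
shade = (data - min(data(:))) ./ (max(data(:)) - min(data(:)));
c = repmat(shade, [1 1 3]);   % identical R, G and B channels
figure;
surf(data, c);                % surf(Z,C): C supplies the colors directly
shading interp;
```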
The full code is below, followed by an explanation with three figures. As in the previous post, the data set I use is from my unpublished thesis in Geology, so I am not able to share it; you will have to use your own data, but the Matlab code is easily adapted.
%% cell 1
figure;
shadedpcolor(x,y,data,(1-normalise(slope)),[-5.9834 2.9969],[0 1],0.45,cube1,0);
axis equal; axis off; axis tight
shadedcolorbar([-5.9834 2.9969],0.55,cube1);
%% cell 2
freezeColors;
H=findobj(gcf,'Type','image');
HNDL=get(H,'cdata');
figure;
imagesc(HNDL); colorbar;
axis tight
axis equal
axis off
In cell 2 I use handle graphics (the findobj and get commands) to locate the image from Figure 1, capture its color information, and store it in the variable HNDL. To test the idea I then plot the image with the default jet colormap; Figure 2 shows the result with its colorbar. Notice that even though the colorbar is jet (it shows intensity values from Figure 1, not gravity residuals), the colors in the map are the correct ones, meaning the method has worked.
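One check worth adding at this point (my own suggestion, not part of the original workflow) is whether the captured cdata is indexed or truecolor, since the two behave differently when handed to surf:

```matlab
% HNDL as captured in cell 2: indexed (m-by-n) or truecolor (m-by-n-by-3)?
if ndims(HNDL) == 3 && size(HNDL,3) == 3
    disp('truecolor: RGB values, independent of the current colormap');
else
    disp('indexed: values are mapped through the current colormap');
end
```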
%% cell 3
figure;
surf(XI,YI,data*5,HNDL); shading interp;
set(gca,'YDir','reverse');
daspect([0.5 0.5 10]);
view(-25,60)
axis tight
axis equal
axis off
shadedcolorbar([-5.9834 2.9969],0.55,cube1);
Finally, in cell 3, I use surf to get the desired 3D shaded map, and then shadedcolorbar, the utility associated with shadedpcolor, to add the shaded colorbar.
Part 2 – more on color and shading combination
Reader Evan suggested in the comment section of the previous post that with this method I am essentially using two different color parameters, the RGB triplet and the transparency, to display one element of the data, the gravity residual “elevation”, and that the slope is already shown in the gradient of the colors. This is essentially correct, but we also need to ask ourselves which channel in our visual system will read the color information, and how our brain is going to use it.
In this paper Steve Lynch provides a compelling empirical demonstration (through a survey), followed by a sound scientific explanation (based on human trivariate color vision theory), of the fact that with color alone our brain would more often than not fail to reconstruct a three-dimensional image from its two-dimensional projection (the colored map). To be able to do that, we need the achromatic visual circuit to read the contrast between light and dark. Below, in Figure 4, I follow Steve’s example and separate the information again into two maps.
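To reproduce that kind of separation on any RGB map, here is a rough sketch (using Matlab’s sample image peppers.png as a stand-in; substitute your own map) that isolates the achromatic lightness from the chromatic hue and saturation:

```matlab
% Split an RGB image into achromatic and chromatic components (sketch)
rgb = im2double(imread('peppers.png'));   % stand-in image, not my data
achromatic = rgb2gray(rgb);               % lightness: what the light/dark circuit reads
hsvimg = rgb2hsv(rgb);
hsvimg(:,:,3) = 0.5;                      % flatten value: hue and saturation only
chromatic = hsv2rgb(hsvimg);
figure;
subplot(1,2,1); imshow(achromatic); title('achromatic (lightness)');
subplot(1,2,2); imshow(chromatic);  title('chromatic (hue and saturation)');
```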
So why bother using color at all? From the research I’ve done on the subject I gather that our brain uses color more for classification and comparison. There is an interesting slide (which was taken from Russell Taylor’s UNC CS 290 Course Notes) on page 31 of this presentation:
So, to go back to Evan’s question, I would say that perhaps it is good to use the information redundantly, at least initially. Once our brain has formed a 3D model of the data during the first look, we can go back and use the transparency for other tasks, such as overlaying different attributes. That will be the subject of the third part of this series, which I will publish in a future post.
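As a rough preview of what I mean by overlaying, here is a hypothetical sketch using the AlphaData property of an image to drape one attribute semi-transparently over another (the variable names are placeholders, not my actual data):

```matlab
% Drape a second attribute over a base map with transparency (sketch)
figure;
imagesc(data); hold on;          % base attribute, e.g. gravity residuals
h = imagesc(slope);              % second attribute drawn on top
set(h, 'AlphaData', 0.4);        % uniform 40% opacity for the overlay
axis equal; axis tight; axis off
% Note: both images share the axes colormap; for two independent
% colormaps a utility like freezeColors is needed
```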