A new color model designed exclusively for traffic sign color recognition is presented in this paper. It features high-order HSV entities that reflect the relationship between various viewing conditions on the road and the appearance of traffic sign colors. Our model, named the adaptive high-order HSV model, is a formulation of ground-truth traffic sign color probabilities built from the extensive United States Department of Transportation database. Functional link networks, employed in our approach to train the traffic sign color PDFs, provide flexibility in modeling the target function with high accuracy in fitting the model over the whole color space gamut. Additionally, a white balancing method is introduced to recognize achromatic sign colors while adaptively accommodating the strength of illumination sources. The type I and II errors, which represent the quality of our color model, show outstanding performance in recognizing traffic sign colors under various viewing conditions.

This paper is under review and will appear after acceptance. This page provides supplementary materials.

The simplest way to share the HO-HSV color model developed here for traffic sign color recognition would be to list all of its parameters in a series of tables. However, this method incurs high computational cost in addition to consuming a large amount of publication space. This problem has persisted in almost all image processing work related to new color models: many authors briefly mention which method they used to train their proprietary color models, but fail to deliver the model to the reader in a reusable form. To remedy this, this section proposes a unified and simple method, with algorithms, to transfer or share a new color model in a standardized way within the group of similar research areas.

Currently, most commercial image capturing devices, including cameras and video recorders, are built around the RGB color space: their sensors are selectively calibrated to accept the red, green, and blue color spectra. Therefore, from the standpoint of computation and convenience, the best way to process an image is to handle the raw data directly in the RGB color space. However, the problem of handling color information in the RGB space has already been discussed in the introduction: its distorted color distances do not parallel how humans perceive color differences. This fact led us, and many other researchers, to use the HSV or other color spaces. The practical issue that naturally arises is then the cost of converting an input image represented in the RGB color space into another color space.
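To illustrate the per-pixel conversion cost discussed above, the following minimal sketch converts a single 8-bit RGB pixel to HSV using Python's standard library (this is generic RGB-to-HSV conversion, not the paper's high-order model; the function name is ours):

```python
import colorsys

# Convert one 8-bit RGB pixel to HSV. Running this per pixel, per frame,
# is what makes on-the-fly color space conversion expensive compared with
# a precomputed lookup table.
def rgb8_to_hsv(r, g, b):
    # colorsys works on floats in [0, 1]; H, S, V are returned in [0, 1].
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# Example: pure red maps to hue 0, full saturation, full value.
h, s, v = rgb8_to_hsv(255, 0, 0)
print(h, s, v)
```

A full frame requires this computation width x height times, which motivates the lookup table approach below.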

Even though we propose the high-order HSV color space in this paper, its computational cost may hamper its use if the application domain requires real-time or comparably fast performance. An alternative that costs some memory space but provides real-time performance is a direct lookup table (LUT) over all possible RGB color values. Using an LUT in image processing (Moreno et al. 1997) is a well-known technique for fast processing, in which the LUT is stored directly in the memory of the image processing DSP (Digital Signal Processing) board or in main computer memory for comparison. This can be done by first allocating a three-dimensional memory block of size 256x256x256 (for R, G, and B) and storing the value that maps each RGB color into the new color model. The color space conversion cost then increases only linearly, *O(n)*, with the size (width x height) of an input image. Users may dump the entire memory of this buffer for later use. This method is straightforward, but the required memory can be large: for example, an RGB-to-HSV color space lookup table costs 256x256x256x32 bits, which is 67 MB. If the model segments the RGB color space into a proprietary number of colors (*N*), the memory size becomes a function of *N*, as shown in Eq. (11), where *a* is the bit depth of the input color space (typically 24 bits for RGB) and *b* is the bit size of each stored value (32 bits here).

S(M) = N · 2^a · b (11)
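As a sanity check, Eq. (11) can be evaluated directly; this small sketch (the function name is ours) reproduces the sizes quoted in the text:

```python
# Storage size of the color model LUT per Eq. (11): S(M) = N * 2**a * b bits,
# where a is the input color space depth in bits (24 for RGB) and b is the
# bits per stored entry (32 here). Returned in bytes for convenience.
def lut_size_bytes(N, a=24, b=32):
    return N * (2 ** a) * b // 8

# One stored value per RGB triple: about 67 MB.
print(lut_size_bytes(1))    # 67108864 bytes
# Ten MUTCD colors: about 670 MB.
print(lut_size_bytes(10))   # 671088640 bytes
```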

In our case, 10 MUTCD colors are segmented, so by Eq. (11) the required storage to dump the whole color lookup table is 670 MB. To reduce the size of the lookup table so that the model is easy to share or transfer, it can be compressed with a lossless method. The most convenient way to achieve this is to create an image of size 4096x4096 (2^12 x 2^12 = 2^24 pixels, one per RGB triple) and store the target color space values in the RGB values of the image.

The pseudocode for this color model LUT method is given below. *ImageBuffer* is the buffer of the color LUT, and the LUT image is fixed at width 4096 and height 4096 with 3 color planes (R, G, B). *Ir* is the memory index of the red color value (typically 0, although some image processing routines allocate the first index to blue). *Ig* and *Ib* are the indices for green and blue. If a color model for multiple color segmentation is needed, *ImageBuffer* and *NewColorModelLUT* require one more dimension to hold those values.

/* RGB value range: 0 <= R, G, B <= 255 */
INTEGER R, G, B
INTEGER cur_row = 0, cur_column = 0
/* Assign image buffer */
ImageBuffer = createImageinSize-Width-Height-Colorplanes(4096, 4096, 3)
for (R = 0; R <= 255; ++R) {
    for (G = 0; G <= 255; ++G) {
        for (B = 0; B <= 255; ++B) {
            ImageBuffer[cur_row][cur_column][Ir] = NewColorModelLUT[R][G][B][Ir]
            ImageBuffer[cur_row][cur_column][Ig] = NewColorModelLUT[R][G][B][Ig]
            ImageBuffer[cur_row][cur_column][Ib] = NewColorModelLUT[R][G][B][Ib]
            cur_column = cur_column + 1
            if (cur_column >= 4096) {   /* wrap to the next row */
                cur_row = cur_row + 1
                cur_column = 0          /* memory index starts from 0 */
            }
        }
    }
}
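The pseudocode above can be sketched as runnable Python. To keep the example fast, this scaled-down version quantizes each channel to 16 levels instead of 256, so the LUT image is 64x64 instead of 4096x4096; the row/column bookkeeping is identical. `toy_color_model` is a hypothetical stand-in for the trained *NewColorModelLUT*:

```python
LEVELS = 16          # per-channel levels (256 in the full-size LUT)
WIDTH = HEIGHT = 64  # 64*64 == 16**3, one pixel per (r, g, b) triple

def toy_color_model(r, g, b):
    # Placeholder for the trained color model: identity mapping here.
    return (r, g, b)

# Allocate the LUT image: HEIGHT rows of WIDTH (r, g, b) pixels.
image = [[None] * WIDTH for _ in range(HEIGHT)]
row = col = 0
for r in range(LEVELS):
    for g in range(LEVELS):
        for b in range(LEVELS):
            image[row][col] = toy_color_model(r, g, b)
            col += 1
            if col >= WIDTH:  # wrap to the next row
                row += 1
                col = 0
```

The linear pixel index of a triple is r*LEVELS**2 + g*LEVELS + b, mirroring the 2^16/2^8 weighting used in the full-size LUT.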

Once the color model LUT image has been created, users may store it in any image format that supports lossless compression (e.g., GIF, PNG, TIFF). In our experiment, the entire HO-HSV color model LUT was compressed in PNG format down to 650 KB, approximately 1% of the original 65 MB buffer size. Users can therefore exchange the new color model simply by sharing one small image. As a demonstration, the left image of Figure 15 shows all 16.7 million RGB values in one image, and the right image shows the HO-HSV color model for MUTCD color recognition.
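The reason the LUT image compresses so well is that neighboring entries map to similar model values. The following sketch illustrates the effect with a smooth synthetic buffer and stdlib `zlib` (PNG uses DEFLATE, the same underlying algorithm, plus row filtering; this is an illustration, not the paper's exact pipeline):

```python
import zlib

# Build a small synthetic "LUT image" with smoothly varying values,
# one (r, g, b) byte triple per pixel.
width = height = 256
buf = bytearray()
for y in range(height):
    for x in range(width):
        buf += bytes((x % 256, y % 256, (x + y) % 256))

# Lossless compression: the smooth structure yields a large reduction.
compressed = zlib.compress(bytes(buf), level=9)
print(len(buf), len(compressed))
restored = zlib.decompress(compressed)
```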

Its usage is simple. A user first reads the image into an RGB color buffer and then uses Eq. (12) to look up the new color model value for a single input pixel. In Eq. (12), *floor* returns the largest integer less than or equal to its input, and *mod* returns the remainder of the first input divided by the second. If a color model for multiple color segmentation is needed, the HOHSV and image buffers (IB) require one more dimension to look up those values. It is also possible to store a probability value directly in the RGB value: the probability can be encoded into 24 bits by rescaling it from 0 to 16,777,215 and then storing the encoded value in the RGB memory buffer at the designated image buffer address.

HOHSV[Ir] = IB[a][b][Ir], HOHSV[Ig] = IB[a][b][Ig], HOHSV[Ib] = IB[a][b][Ib], (12)
where p = R·2^16 + G·2^8 + B, a = floor(p·2^(-12)), and b = mod(p, 2^12).
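The index arithmetic of Eq. (12) can be sketched as follows (the function and variable names are ours; `lut_image` is assumed to be indexable as `lut_image[row][col]`, and a tiny dict stands in for the full 4096x4096 image):

```python
# Look up a pixel's model value from the 4096x4096 LUT image per Eq. (12):
# the 24-bit index p = R*2**16 + G*2**8 + B selects row a = floor(p / 2**12)
# and column b = p mod 2**12.
def lut_lookup(lut_image, R, G, B):
    p = (R << 16) | (G << 8) | B   # p = R*2**16 + G*2**8 + B
    a = p >> 12                    # floor(p * 2**-12)
    b = p & 0xFFF                  # mod(p, 2**12)
    return lut_image[a][b]

# Worked example: pixel (1, 2, 3) has p = 66051, so row 16, column 515.
p = (1 << 16) | (2 << 8) | 3
print(p, p >> 12, p & 0xFFF)  # -> 66051 16 515
```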

Each pixel of the output image shows the highest-probability MUTCD color; pink pixels indicate non-MUTCD colors.