alarm indication of the sensed signal. Any DSA design has to consider the tradeoff between these two evaluation measures. In some cases, signal measurements can yield a higher probability of detection at a lower probability of false alarm, but the tradeoff always exists.
Figure 3A.1 Ideal labeling of a dataset.
Let us consider a dataset where the values in the dataset can be classified as positive (P) or negative (N). As shown in Figure 3A.1, all the values in the dataset ideally can be classified as P or N.
An observer classifying the dataset into P or N can create four outcomes, as shown in Figure 3A.2. The four outcomes are true positive (TP), true negative (TN), false positive (FP), and false negative (FN). Notice that it is the observer's hypothesized positive and negative values that create the FP and FN outcomes.
Figure 3A.2 A classifier outcome of the dataset.
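To make the four outcomes concrete, the following short Python sketch (the labels and hypothesized values are made up for illustration, not taken from the book's dataset) counts TP, FP, TN, and FN for a labeled dataset and derives the two rates used throughout this appendix.

# Count the four classifier outcomes for a small, hypothetical labeled dataset.
actual     = ["P", "N", "P", "P", "N", "N", "P", "N"]   # ideal (true) labels
hypothesis = ["P", "P", "P", "N", "N", "N", "P", "N"]   # observer's classification

tp = sum(a == "P" and h == "P" for a, h in zip(actual, hypothesis))
fp = sum(a == "N" and h == "P" for a, h in zip(actual, hypothesis))
tn = sum(a == "N" and h == "N" for a, h in zip(actual, hypothesis))
fn = sum(a == "P" and h == "N" for a, h in zip(actual, hypothesis))

p_detection   = tp / (tp + fn)   # probability of detection (true positive rate)
p_false_alarm = fp / (fp + tn)   # probability of false alarm (false positive rate)
print(tp, fp, tn, fn, p_detection, p_false_alarm)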
The idea behind the ROC model is to create plots such that one axis specifies the false positive (false alarm) rate while the other axis specifies the detection rate, as shown in Figure 3A.3 [26].
Figure 3A.3 Specifying FP and FN rates.
Let us assume that our ROC model plots the false positive rate (the probability of false alarm) on the x axis and the detection rate (sensitivity, or true positive rate) on the y axis. We refer to this ROC model as the ROC space; it is a two‐dimensional space that allows us to create the tradeoff needed in DSA design. A DSA decision‐making process is a classifier in the ROC space.
The ROC Curve as Connecting Points
An ROC point is a point in the ROC space with x and y values, where x is the probability of false alarm and y is the probability of detection. Each of the x and y axes spans from 0 to 1. Let us use an example of ROC curves in the ROC space, where we simplify the curves by linearly connecting adjacent points. Let us assume that, for an example dataset as explained above, we have four possible ways to classify the dataset:
1 Achieve a probability of detection equal to zero at a probability of false alarm equal to zero.
2 Achieve a probability of detection equal to 0.5 at a probability of false alarm equal to 0.25.
3 Achieve a probability of detection equal to 0.75 at a probability of false alarm equal to 0.5.
4 Achieve a probability of detection equal to 1 at a probability of false alarm equal to 1.
These four possibilities become four points on a ROC curve in the ROC space, as shown in Figure 3A.4. Notice that each point corresponds to a threshold that can be used to hypothesize the classification of the dataset values. In the ROC space, each point on a ROC curve can become a decision threshold. The key here is to decide on an acceptable probability of false alarm and accept the probability of detection that comes with it, as the sketch following Figure 3A.4 illustrates.
Figure 3A.4 An example of a ROC curve in the ROC space.
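As a minimal sketch of this selection (the target false alarm probability of 0.3 is hypothetical), the following code linearly interpolates the four example points above and returns the probability of detection achievable at a chosen, acceptable probability of false alarm.

# Example ROC points from the list above: (probability of false alarm, probability of detection).
roc_points = [(0.0, 0.0), (0.25, 0.5), (0.5, 0.75), (1.0, 1.0)]

def detection_at(pfa_target, points):
    # Linearly interpolate the piecewise-linear ROC curve to find Pd at an acceptable Pfa.
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= pfa_target <= x1:
            return y0 + (y1 - y0) * (pfa_target - x0) / (x1 - x0)
    raise ValueError("pfa_target must lie in [0, 1]")

print(detection_at(0.3, roc_points))   # acceptable Pfa of 0.3 -> Pd of 0.55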
3A.2 The ROC Curve Classifications
Only certain areas of the ROC space should be used. Figure 3A.5 shows the different areas of the ROC space as follows (a short sketch after the figure illustrates the distinction):
1 The poor performance area. This area, below the random ROC curve, should be avoided; a classifier operating here could be replaced by a random decision process and perform no worse.
2 The random cutoff. This is the ROC curve associated with random decision making.
3 The use area where the tradeoff between false alarm and detection probabilities is acceptable.
4 The perfect curve where the probability of detection is always 1. Note that the vertical line should be the decision threshold line in this case.
Figure 3A.5 ROC space working areas and thresholds.
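As an illustrative check (a sketch, not part of the book's design), the following code tests where a classifier's operating point falls relative to the random ROC curve, i.e., whether it lies in the use area above the diagonal, on the random curve, or in the poor performance area below it.

def roc_region(p_false_alarm, p_detection, tol=1e-9):
    # Classify an operating point relative to the random (diagonal) ROC curve.
    if p_detection > p_false_alarm + tol:
        return "use area (better than random)"
    if p_detection < p_false_alarm - tol:
        return "poor performance area (worse than random)"
    return "on the random ROC curve"

print(roc_region(0.25, 0.5))   # example point from Figure 3A.4 -> use area
print(roc_region(0.6, 0.4))    # hypothetical point -> poor performance area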
With the ROC space, a classifier that yields acceptable performance should lie in the area between the random ROC curve and the perfect ROC curve. Figure 3A.6 shows multiple classifiers in the ROC space, where the top classifier would yield better performance than the bottom classifier [27].
Figure 3A.6 Multiple classifier ROC curves.
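One simple way to quantify such a comparison (an illustrative convention assumed here, not one the book prescribes) is the area under each piecewise-linear ROC curve: the closer a curve lies to the perfect curve, the closer its area is to 1.

def area_under_curve(points):
    # Trapezoidal area under a piecewise-linear ROC curve given as (Pfa, Pd) points.
    pts = sorted(points)
    return sum((x1 - x0) * (y0 + y1) / 2.0 for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

top    = [(0.0, 0.0), (0.1, 0.7), (0.3, 0.9), (1.0, 1.0)]   # hypothetical "top" classifier
bottom = [(0.0, 0.0), (0.3, 0.4), (0.6, 0.7), (1.0, 1.0)]   # hypothetical "bottom" classifier
print(area_under_curve(top), area_under_curve(bottom))      # larger area -> better classifier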
Comparing Figure 3A.6 to Figure 3.2, one can see how the DSA ROC space can be conceptualized and how different SNIR values create different classifiers. Figure 3.2 shows that a higher SNIR brings the ROC curve closer to the perfect curve, and that decision thresholds can be drawn as vertical lines: at a given SNIR, an acceptable probability of false alarm is identified, which in turn determines the achievable probability of detection.
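To illustrate how a higher SNIR pulls the ROC curve toward the perfect curve, the sketch below uses a common Gaussian (central limit theorem) approximation of an N-sample energy detector; this closed form and the chosen SNIR values and sample count are assumptions made for illustration, not necessarily the receiver model behind Figure 3.2.

# Gaussian (CLT) approximation of an N-sample energy detector ROC (illustrative model).
import numpy as np
from scipy.stats import norm

def p_detection(p_false_alarm, snir_linear, n_samples):
    # Probability of detection achieved at a chosen probability of false alarm.
    return norm.sf((norm.isf(p_false_alarm) - snir_linear * np.sqrt(n_samples / 2.0))
                   / (1.0 + snir_linear))

n_samples = 100                      # assumed sensing window length
for snir_db in (-5.0, 0.0, 5.0):     # hypothetical SNIR values
    snir = 10.0 ** (snir_db / 10.0)
    print(f"SNIR {snir_db:+.0f} dB -> Pd at Pfa = 0.1: {p_detection(0.1, snir, n_samples):.3f}")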
Notice the importance of decision fusion. A ROC‐based decision (e.g., signal detection) can be made per RF neighbor or per antenna sector. While such a single ROC decision can seem insufficient because of the nonzero probability of false alarm, fusing the decisions from all the RF neighbors or from all the antenna sectors can yield a more accurate signal detection outcome. Distributed cooperative DSA decisions can increase the decision accuracy further, and a centralized arbitrator with a bird's eye view of the area of operation, working from the collection of local and distributed decisions, can increase the accuracy of DSA decision making further still.
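A minimal sketch of hard-decision fusion follows, assuming n independent RF neighbors (or antenna sectors) that each report a binary detection with the same per-node Pd and Pfa; the k-out-of-n rule shown (the OR rule when k = 1, a majority rule when k = 3 of 5) is a standard fusion rule used here for illustration, not necessarily the arbitration scheme the book specifies.

# k-out-of-n hard-decision fusion of independent per-node detections (illustrative).
from math import comb

def fused_probability(p_node, n_nodes, k):
    # Probability that at least k of n independent nodes declare a detection.
    return sum(comb(n_nodes, i) * p_node**i * (1 - p_node)**(n_nodes - i)
               for i in range(k, n_nodes + 1))

pd_node, pfa_node, n = 0.75, 0.10, 5          # hypothetical per-node operating point
for k in (1, 3):                              # OR rule and majority rule
    print(f"k={k}: fused Pd = {fused_probability(pd_node, n, k):.3f}, "
          f"fused Pfa = {fused_probability(pfa_node, n, k):.3f}")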