cola Report for GDS4393

Date: 2019-12-25 21:34:01 CET, cola version: 1.3.2


Summary

All available functions which can be applied to this res_list object:

res_list
#> A 'ConsensusPartitionList' object with 24 methods.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows are extracted by 'SD, CV, MAD, ATC' methods.
#>   Subgroups are detected by 'hclust, kmeans, skmeans, pam, mclust, NMF' method.
#>   Number of partitions are tried for k = 2, 3, 4, 5, 6.
#>   Performed in total 30000 partitions by row resampling.
#> 
#> Following methods can be applied to this 'ConsensusPartitionList' object:
#>  [1] "cola_report"           "collect_classes"       "collect_plots"         "collect_stats"        
#>  [5] "colnames"              "functional_enrichment" "get_anno_col"          "get_anno"             
#>  [9] "get_classes"           "get_matrix"            "get_membership"        "get_stats"            
#> [13] "is_best_k"             "is_stable_k"           "ncol"                  "nrow"                 
#> [17] "rownames"              "show"                  "suggest_best_k"        "test_to_known_factors"
#> [21] "top_rows_heatmap"      "top_rows_overlap"     
#> 
#> You can get result for a single method by, e.g. object["SD", "hclust"] or object["SD:hclust"]
#> or a subset of methods by object[c("SD", "CV"), c("hclust", "kmeans")]

The call of run_all_consensus_partition_methods() was:

#> run_all_consensus_partition_methods(data = mat, mc.cores = 4, anno = anno)

Dimension of the input matrix:

mat = get_matrix(res_list)
dim(mat)
#> [1] 51941    54

Density distribution

The density distribution of each sample is visualized as one column in the following heatmap. The columns are clustered by a distance defined as the Kolmogorov-Smirnov statistic between two distributions.

library(ComplexHeatmap)
densityHeatmap(mat, top_annotation = HeatmapAnnotation(df = get_anno(res_list), 
    col = get_anno_col(res_list)), ylab = "value", cluster_columns = TRUE, show_column_names = FALSE,
    mc.cores = 4)

plot of chunk density-heatmap
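The Kolmogorov-Smirnov statistic used as the column distance above is simply the maximum gap between two empirical CDFs. As a language-neutral illustration (a sketch of the definition, not the ComplexHeatmap implementation):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, evaluated on the
    pooled grid of observed values."""
    grid = np.sort(np.concatenate([x, y]))
    ecdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.max(np.abs(ecdf(x) - ecdf(y)))

def ks_dist_matrix(mat):
    """Pairwise KS distances between the columns of a matrix,
    as used for clustering the samples in the density heatmap."""
    n = mat.shape[1]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = ks_statistic(mat[:, i], mat[:, j])
    return d
```

Identical distributions give a distance of 0; completely non-overlapping ones give 1.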

Suggest the best k

The following table shows the suggested best k (number of partitions) for each combination of top-value method and partition method. Clicking on the method name in the table goes to the section for that single combination of methods.

The cola vignette explains the definition of the metrics used for determining the best number of partitions.

suggest_best_k(res_list)
method        best k  1-PAC  mean silhouette  concordance  mark  optional k
ATC:kmeans         2  1.000            0.984        0.994    **
ATC:skmeans        3  1.000            0.989        0.993    **   2
ATC:hclust         2  0.987            0.942        0.973    **
ATC:NMF            2  0.959            0.946        0.976    **
ATC:mclust         3  0.943            0.944        0.976    *
ATC:pam            5  0.934            0.878        0.952    *    2,3
CV:skmeans         2  0.885            0.947        0.974
CV:NMF             2  0.885            0.912        0.964
CV:kmeans          2  0.799            0.842        0.931
MAD:pam            2  0.689            0.872        0.942
MAD:skmeans        2  0.684            0.847        0.935
SD:NMF             2  0.675            0.838        0.933
SD:mclust          4  0.675            0.750        0.829
CV:mclust          4  0.631            0.732        0.852
MAD:NMF            2  0.627            0.838        0.931
SD:skmeans         2  0.623            0.822        0.928
MAD:hclust         4  0.619            0.703        0.846
SD:pam             2  0.600            0.859        0.932
CV:pam             2  0.577            0.857        0.931
MAD:kmeans         2  0.492            0.774        0.890
MAD:mclust         2  0.413            0.850        0.886
CV:hclust          5  0.358            0.557        0.702
SD:kmeans          2  0.275            0.711        0.833
SD:hclust          2  0.184            0.788        0.829

**: 1-PAC > 0.95, *: 1-PAC > 0.9
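As a rough illustration of the 1-PAC score driving this ranking: PAC (proportion of ambiguous clustering) is the fraction of pairwise consensus values falling in an ambiguous middle interval, commonly (0.1, 0.9). A minimal numpy sketch, assuming that interval (cola's exact implementation may differ):

```python
import numpy as np

def one_minus_pac(consensus, x1=0.1, x2=0.9):
    """1 - PAC for a symmetric consensus matrix. PAC is the fraction of
    off-diagonal consensus values in the ambiguous interval (x1, x2):
    values near 0 or 1 mean a sample pair is consistently apart or
    together; values in between indicate instability."""
    iu = np.triu_indices_from(consensus, k=1)  # upper-triangle pairs
    v = consensus[iu]
    pac = np.mean((v > x1) & (v < x2))
    return 1.0 - pac
```

A perfectly stable partition (consensus values only 0 or 1) gives 1-PAC = 1; a maximally ambiguous one (all values 0.5) gives 0.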

CDF of consensus matrices

Cumulative distribution function (CDF) curves of the consensus matrices for all methods.

collect_plots(res_list, fun = plot_ecdf)

plot of chunk collect-plots

Consensus heatmap

Consensus heatmaps for all methods. (What is a consensus heatmap?)

collect_plots(res_list, k = 2, fun = consensus_heatmap, mc.cores = 4)

plot of chunk tab-collect-consensus-heatmap-1

Membership heatmap

Membership heatmaps for all methods. (What is a membership heatmap?)

collect_plots(res_list, k = 2, fun = membership_heatmap, mc.cores = 4)

plot of chunk tab-collect-membership-heatmap-1

Signature heatmap

Signature heatmaps for all methods. (What is a signature heatmap?)

Note that in the following heatmaps, rows are scaled.

collect_plots(res_list, k = 2, fun = get_signatures, mc.cores = 4)

plot of chunk tab-collect-get-signatures-1

Statistics table

The statistics used for measuring the stability of consensus partitioning. (How are they defined?)

get_stats(res_list, k = 2)
#>             k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> SD:NMF      2 0.675           0.838       0.933          0.496 0.508   0.508
#> CV:NMF      2 0.885           0.912       0.964          0.487 0.508   0.508
#> MAD:NMF     2 0.627           0.838       0.931          0.507 0.493   0.493
#> ATC:NMF     2 0.959           0.946       0.976          0.493 0.502   0.502
#> SD:skmeans  2 0.623           0.822       0.928          0.508 0.491   0.491
#> CV:skmeans  2 0.885           0.947       0.974          0.504 0.493   0.493
#> MAD:skmeans 2 0.684           0.847       0.935          0.509 0.493   0.493
#> ATC:skmeans 2 1.000           1.000       1.000          0.507 0.493   0.493
#> SD:mclust   2 0.293           0.602       0.797          0.404 0.491   0.491
#> CV:mclust   2 0.466           0.804       0.840          0.424 0.497   0.497
#> MAD:mclust  2 0.413           0.850       0.886          0.458 0.497   0.497
#> ATC:mclust  2 0.576           0.933       0.953          0.492 0.491   0.491
#> SD:kmeans   2 0.275           0.711       0.833          0.498 0.493   0.493
#> CV:kmeans   2 0.799           0.842       0.931          0.485 0.508   0.508
#> MAD:kmeans  2 0.492           0.774       0.890          0.504 0.497   0.497
#> ATC:kmeans  2 1.000           0.984       0.994          0.506 0.493   0.493
#> SD:pam      2 0.600           0.859       0.932          0.506 0.491   0.491
#> CV:pam      2 0.577           0.857       0.931          0.506 0.491   0.491
#> MAD:pam     2 0.689           0.872       0.942          0.506 0.493   0.493
#> ATC:pam     2 1.000           0.983       0.993          0.509 0.491   0.491
#> SD:hclust   2 0.184           0.788       0.829          0.445 0.497   0.497
#> CV:hclust   2 0.206           0.785       0.855          0.313 0.693   0.693
#> MAD:hclust  2 0.201           0.600       0.785          0.378 0.770   0.770
#> ATC:hclust  2 0.987           0.942       0.973          0.493 0.508   0.508

The following heatmap plots the partition from each combination of methods, where the lightness corresponds to the silhouette score of each sample under that method. On top, the consensus subgroup is inferred from all methods, taking the mean silhouette scores as weights.

collect_stats(res_list, k = 2)

plot of chunk tab-collect-stats-from-consensus-partition-list-1
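The idea of weighting methods by their mean silhouette score can be sketched as a weighted majority vote (a simplified illustration assuming class labels are already matched across methods; not cola's exact procedure):

```python
import numpy as np

def weighted_consensus_class(class_mat, weights):
    """Combine per-method partitions into one consensus label per sample.
    class_mat: samples x methods matrix of class labels (assumed matched
    across methods); weights: one weight per method, e.g. its mean
    silhouette score. Each sample gets the label with the largest total
    weight among the methods that assigned it."""
    labels = np.unique(class_mat)
    out = np.empty(class_mat.shape[0], dtype=class_mat.dtype)
    for s in range(class_mat.shape[0]):
        scores = {l: weights[class_mat[s] == l].sum() for l in labels}
        out[s] = max(scores, key=scores.get)
    return out
```

A low-silhouette (unstable) method thus contributes little to the consensus label.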

Partition from all methods

Collect partitions from all methods:

collect_classes(res_list, k = 2)

plot of chunk tab-collect-classes-from-consensus-partition-list-1

Top rows overlap

Overlap of top rows from different top-row methods:

top_rows_overlap(res_list, top_n = 1000, method = "euler")

plot of chunk tab-top-rows-overlap-by-euler-1

The correspondence of rankings between different top-value methods can also be visualized:

top_rows_overlap(res_list, top_n = 1000, method = "correspondance")

plot of chunk tab-top-rows-overlap-by-correspondance-1

Heatmaps of the top rows:

top_rows_heatmap(res_list, top_n = 1000)

plot of chunk tab-top-rows-heatmap-1

Test to known annotations

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res_list, k = 2)
#>              n specimen(p) individual(p) k
#> SD:NMF      49    6.13e-09        0.5171 2
#> CV:NMF      52    7.88e-09        0.5719 2
#> MAD:NMF     50    3.04e-07        0.7591 2
#> ATC:NMF     53    2.95e-03        0.8705 2
#> SD:skmeans  51    6.76e-09        0.6799 2
#> CV:skmeans  54    4.40e-08        0.5852 2
#> MAD:skmeans 50    3.04e-07        0.5704 2
#> ATC:skmeans 54    4.95e-04        1.0000 2
#> SD:mclust   47    2.95e-07        0.6402 2
#> CV:mclust   51    1.08e-08        0.4676 2
#> MAD:mclust  53    6.54e-06        0.4820 2
#> ATC:mclust  53    3.14e-04        1.0000 2
#> SD:kmeans   48    1.47e-09        0.5255 2
#> CV:kmeans   49    6.13e-09        0.6823 2
#> MAD:kmeans  50    3.04e-07        0.5704 2
#> ATC:kmeans  53    6.98e-04        1.0000 2
#> SD:pam      53    4.12e-06        0.8887 2
#> CV:pam      52    4.92e-05        0.5788 2
#> MAD:pam     51    4.64e-05        0.6799 2
#> ATC:pam     54    2.21e-04        1.0000 2
#> SD:hclust   52    3.05e-01        0.5631 2
#> CV:hclust   54    9.21e-04        0.0798 2
#> MAD:hclust  47    9.62e-01        0.7029 2
#> ATC:hclust  53    2.95e-03        0.8705 2
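For a discrete annotation, the contingency-table statistic behind such p-values can be illustrated as follows (a minimal sketch of the Pearson chi-squared statistic only; the p-value step and R's continuity corrections are omitted):

```python
import numpy as np

def chi2_stat(labels_a, labels_b):
    """Pearson chi-squared statistic for the contingency table of two
    discrete label vectors, e.g. subgroup labels vs. a known annotation."""
    a_vals, a_idx = np.unique(labels_a, return_inverse=True)
    b_vals, b_idx = np.unique(labels_b, return_inverse=True)
    obs = np.zeros((len(a_vals), len(b_vals)))
    np.add.at(obs, (a_idx, b_idx), 1)          # observed counts
    expected = np.outer(obs.sum(1), obs.sum(0)) / obs.sum()
    return np.sum((obs - expected) ** 2 / expected)
```

A large statistic (relative to the chi-squared distribution with the table's degrees of freedom) means the subgroups and the annotation are associated.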

Results for each method


SD:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "hclust"]
# you can also extract it by
# res = res_list["SD:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto one single page, to provide an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-hclust-collect-plots

The plots are:

Each of the panels can also be produced by an individual function; those plots appear later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.184           0.788       0.829         0.4447 0.497   0.497
#> 3 3 0.219           0.673       0.772         0.2852 0.911   0.820
#> 4 4 0.427           0.651       0.791         0.2432 0.843   0.616
#> 5 5 0.543           0.665       0.779         0.0496 0.973   0.891
#> 6 6 0.613           0.628       0.760         0.0586 1.000   1.000

suggest_best_k() suggests the best k based on these statistics. The rules are as follows:

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the "SE" method; each value in the membership matrix is the probability that a sample belongs to the corresponding group, and the final class label of a sample is the group with the highest probability.

In get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
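The per-sample entropy can be reproduced directly from the membership probabilities; for example, the base-2 entropy of probabilities (0.996, 0.004) is 0.0376, matching the table below (for k > 2, cola may additionally normalize by log2(k); that factor is 1 here):

```python
import math

def membership_class_and_entropy(p):
    """For one sample's membership probabilities: the class label is the
    group with the highest probability, and the base-2 entropy measures
    how ambiguous the assignment is (0 = certain, 1 = 50/50 for k = 2)."""
    cls = max(range(len(p)), key=lambda i: p[i]) + 1  # 1-based label
    h = -sum(q * math.log2(q) for q in p if q > 0)
    return cls, h
```

For (0.260, 0.740) this gives class 2 with entropy 0.8267, again as in the table.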


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     1  0.0376      0.820 0.996 0.004
#> GSM710829     2  0.8267      0.764 0.260 0.740
#> GSM710839     1  0.0938      0.816 0.988 0.012
#> GSM710841     2  0.8267      0.764 0.260 0.740
#> GSM710843     1  0.6148      0.827 0.848 0.152
#> GSM710845     1  0.0000      0.817 1.000 0.000
#> GSM710846     2  0.8661      0.708 0.288 0.712
#> GSM710849     2  0.8267      0.764 0.260 0.740
#> GSM710853     2  0.0938      0.774 0.012 0.988
#> GSM710855     2  0.0376      0.768 0.004 0.996
#> GSM710858     2  0.0938      0.774 0.012 0.988
#> GSM710860     1  0.0938      0.816 0.988 0.012
#> GSM710801     2  0.8144      0.772 0.252 0.748
#> GSM710813     2  0.9044      0.705 0.320 0.680
#> GSM710814     1  0.1414      0.818 0.980 0.020
#> GSM710815     1  0.5178      0.826 0.884 0.116
#> GSM710816     1  0.0376      0.816 0.996 0.004
#> GSM710817     2  0.4298      0.801 0.088 0.912
#> GSM710818     1  0.0376      0.820 0.996 0.004
#> GSM710819     2  0.6801      0.654 0.180 0.820
#> GSM710820     2  0.8267      0.764 0.260 0.740
#> GSM710830     1  0.7056      0.843 0.808 0.192
#> GSM710831     2  0.4431      0.802 0.092 0.908
#> GSM710832     1  0.7056      0.843 0.808 0.192
#> GSM710833     2  0.7528      0.791 0.216 0.784
#> GSM710834     1  0.0376      0.819 0.996 0.004
#> GSM710835     2  0.7453      0.810 0.212 0.788
#> GSM710836     2  0.7528      0.790 0.216 0.784
#> GSM710837     2  0.7453      0.801 0.212 0.788
#> GSM710862     1  0.3879      0.848 0.924 0.076
#> GSM710863     1  0.6623      0.851 0.828 0.172
#> GSM710865     1  0.6623      0.851 0.828 0.172
#> GSM710867     1  0.7219      0.836 0.800 0.200
#> GSM710869     2  0.8608      0.759 0.284 0.716
#> GSM710871     1  0.7219      0.836 0.800 0.200
#> GSM710873     2  0.4690      0.772 0.100 0.900
#> GSM710802     1  0.5178      0.826 0.884 0.116
#> GSM710803     1  0.7056      0.843 0.808 0.192
#> GSM710804     2  0.7453      0.810 0.212 0.788
#> GSM710805     2  0.9552      0.677 0.376 0.624
#> GSM710806     2  0.7528      0.808 0.216 0.784
#> GSM710807     2  0.7219      0.806 0.200 0.800
#> GSM710808     1  0.6973      0.844 0.812 0.188
#> GSM710809     2  0.5946      0.812 0.144 0.856
#> GSM710810     1  0.4690      0.851 0.900 0.100
#> GSM710811     1  0.7139      0.840 0.804 0.196
#> GSM710812     1  0.6973      0.844 0.812 0.188
#> GSM710821     1  0.6343      0.854 0.840 0.160
#> GSM710822     1  0.9922      0.364 0.552 0.448
#> GSM710823     1  0.9922      0.364 0.552 0.448
#> GSM710824     1  0.1633      0.828 0.976 0.024
#> GSM710825     1  0.6247      0.854 0.844 0.156
#> GSM710826     1  0.7056      0.843 0.808 0.192
#> GSM710827     1  0.6623      0.851 0.828 0.172

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-hclust-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-hclust-membership-heatmap-1

Once the classes for columns are determined, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-hclust-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and only performs the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
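The relation between mean_x and scaled_mean_x can be illustrated by z-scoring a row before averaging within groups (a sketch; the assumption that a plain z-score with R's n-1 standard deviation is used is ours, not stated by the report):

```python
import numpy as np

def scaled_group_means(row, classes):
    """For one matrix row: z-score the row (mean 0, sd 1 with the n-1
    denominator, as in R's scale()/sd()), then average within each
    class to obtain the scaled group means."""
    z = (row - row.mean()) / row.std(ddof=1)
    return {c: z[classes == c].mean() for c in np.unique(classes)}
```

With two equal-sized groups the scaled means are symmetric around 0, which matches the opposite signs of scaled_mean_1 and scaled_mean_2 in the example output.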

A UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-hclust-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n specimen(p) individual(p) k
#> SD:hclust 52    3.05e-01         0.563 2
#> SD:hclust 48    3.28e-02         0.725 3
#> SD:hclust 44    1.66e-04         0.938 4
#> SD:hclust 46    5.81e-05         0.366 5
#> SD:hclust 42    5.08e-05         0.447 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


SD:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "kmeans"]
# you can also extract it by
# res = res_list["SD:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto one single page, to provide an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-kmeans-collect-plots

The plots are:

Each of the panels can also be produced by an individual function; those plots appear later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.275           0.711       0.833         0.4976 0.493   0.493
#> 3 3 0.437           0.584       0.789         0.3462 0.704   0.469
#> 4 4 0.577           0.639       0.761         0.1239 0.830   0.540
#> 5 5 0.609           0.582       0.710         0.0623 0.930   0.729
#> 6 6 0.647           0.527       0.709         0.0422 0.938   0.709

suggest_best_k() suggests the best k based on these statistics. The rules are as follows:

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the "SE" method; each value in the membership matrix is the probability that a sample belongs to the corresponding group, and the final class label of a sample is the group with the highest probability.

In get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.6343    0.74933 0.160 0.840
#> GSM710829     2  0.0672    0.77232 0.008 0.992
#> GSM710839     2  0.6343    0.74933 0.160 0.840
#> GSM710841     2  0.5178    0.73294 0.116 0.884
#> GSM710843     2  0.5737    0.75729 0.136 0.864
#> GSM710845     2  0.9977    0.23821 0.472 0.528
#> GSM710846     2  0.0000    0.77170 0.000 1.000
#> GSM710849     2  0.5059    0.73550 0.112 0.888
#> GSM710853     2  0.0000    0.77170 0.000 1.000
#> GSM710855     2  0.5946    0.70588 0.144 0.856
#> GSM710858     2  0.0376    0.77022 0.004 0.996
#> GSM710860     2  0.6343    0.74933 0.160 0.840
#> GSM710801     2  0.1633    0.77419 0.024 0.976
#> GSM710813     2  0.1184    0.77223 0.016 0.984
#> GSM710814     2  0.6343    0.74933 0.160 0.840
#> GSM710815     2  0.5737    0.75729 0.136 0.864
#> GSM710816     2  0.6343    0.74933 0.160 0.840
#> GSM710817     2  0.9608    0.36186 0.384 0.616
#> GSM710818     2  0.8327    0.66539 0.264 0.736
#> GSM710819     2  0.9977    0.00281 0.472 0.528
#> GSM710820     2  0.0000    0.77170 0.000 1.000
#> GSM710830     1  0.0376    0.86620 0.996 0.004
#> GSM710831     2  0.8861    0.51605 0.304 0.696
#> GSM710832     1  0.0376    0.86620 0.996 0.004
#> GSM710833     2  0.9977    0.00281 0.472 0.528
#> GSM710834     1  0.8081    0.54430 0.752 0.248
#> GSM710835     1  0.8443    0.65197 0.728 0.272
#> GSM710836     1  0.6801    0.77723 0.820 0.180
#> GSM710837     1  0.5842    0.80021 0.860 0.140
#> GSM710862     1  0.4431    0.80078 0.908 0.092
#> GSM710863     1  0.0376    0.86620 0.996 0.004
#> GSM710865     1  0.0376    0.86620 0.996 0.004
#> GSM710867     1  0.4815    0.82541 0.896 0.104
#> GSM710869     1  0.6438    0.80225 0.836 0.164
#> GSM710871     1  0.0376    0.86620 0.996 0.004
#> GSM710873     1  0.6343    0.78259 0.840 0.160
#> GSM710802     1  0.0376    0.86620 0.996 0.004
#> GSM710803     1  0.0376    0.86620 0.996 0.004
#> GSM710804     2  0.9815    0.26240 0.420 0.580
#> GSM710805     2  0.6148    0.70527 0.152 0.848
#> GSM710806     1  0.7376    0.75174 0.792 0.208
#> GSM710807     1  0.5842    0.80021 0.860 0.140
#> GSM710808     1  0.0376    0.86620 0.996 0.004
#> GSM710809     1  0.7674    0.72127 0.776 0.224
#> GSM710810     1  0.2603    0.84387 0.956 0.044
#> GSM710811     1  0.0376    0.86620 0.996 0.004
#> GSM710812     1  0.0376    0.86620 0.996 0.004
#> GSM710821     1  0.1184    0.86004 0.984 0.016
#> GSM710822     1  0.4815    0.82541 0.896 0.104
#> GSM710823     1  0.7815    0.62208 0.768 0.232
#> GSM710824     1  0.9635    0.21063 0.612 0.388
#> GSM710825     1  0.4939    0.77086 0.892 0.108
#> GSM710826     1  0.0376    0.86620 0.996 0.004
#> GSM710827     1  0.0376    0.86620 0.996 0.004

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-kmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-kmeans-membership-heatmap-1

Once the classes for columns are determined, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-kmeans-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and only performs the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

A UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n specimen(p) individual(p) k
#> SD:kmeans 48    1.47e-09         0.526 2
#> SD:kmeans 43    1.12e-08         0.372 3
#> SD:kmeans 41    1.17e-06         0.417 4
#> SD:kmeans 42    1.00e-05         0.649 5
#> SD:kmeans 39    2.41e-05         0.798 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


SD:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "skmeans"]
# you can also extract it by
# res = res_list["SD:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto one single page, to provide an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-skmeans-collect-plots

The plots are:

Each of the panels can also be produced by an individual function; those plots appear later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.623           0.822       0.928         0.5081 0.491   0.491
#> 3 3 0.587           0.800       0.869         0.3292 0.759   0.544
#> 4 4 0.636           0.623       0.805         0.1226 0.859   0.601
#> 5 5 0.602           0.517       0.716         0.0615 0.925   0.715
#> 6 6 0.628           0.459       0.667         0.0396 0.921   0.660

suggest_best_k() suggests the best k based on these statistics. The rules are as follows:

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the "SE" method; each value in the membership matrix is the probability that a sample belongs to the corresponding group, and the final class label of a sample is the group with the highest probability.

In get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000     0.9017 0.000 1.000
#> GSM710829     2  0.0000     0.9017 0.000 1.000
#> GSM710839     2  0.0000     0.9017 0.000 1.000
#> GSM710841     2  0.2236     0.8826 0.036 0.964
#> GSM710843     2  0.0000     0.9017 0.000 1.000
#> GSM710845     2  0.6801     0.7527 0.180 0.820
#> GSM710846     2  0.0000     0.9017 0.000 1.000
#> GSM710849     2  0.2236     0.8826 0.036 0.964
#> GSM710853     2  0.0000     0.9017 0.000 1.000
#> GSM710855     2  0.5519     0.8054 0.128 0.872
#> GSM710858     2  0.0000     0.9017 0.000 1.000
#> GSM710860     2  0.0000     0.9017 0.000 1.000
#> GSM710801     2  0.0000     0.9017 0.000 1.000
#> GSM710813     2  0.0000     0.9017 0.000 1.000
#> GSM710814     2  0.0000     0.9017 0.000 1.000
#> GSM710815     2  0.0000     0.9017 0.000 1.000
#> GSM710816     2  0.0000     0.9017 0.000 1.000
#> GSM710817     2  0.7219     0.7227 0.200 0.800
#> GSM710818     2  0.7219     0.7233 0.200 0.800
#> GSM710819     2  0.9996     0.0391 0.488 0.512
#> GSM710820     2  0.0000     0.9017 0.000 1.000
#> GSM710830     1  0.0000     0.9260 1.000 0.000
#> GSM710831     2  0.0672     0.8982 0.008 0.992
#> GSM710832     1  0.0000     0.9260 1.000 0.000
#> GSM710833     2  0.9954     0.1407 0.460 0.540
#> GSM710834     1  0.9996    -0.0316 0.512 0.488
#> GSM710835     1  0.9000     0.5312 0.684 0.316
#> GSM710836     1  0.2043     0.9059 0.968 0.032
#> GSM710837     1  0.0000     0.9260 1.000 0.000
#> GSM710862     1  0.5519     0.8124 0.872 0.128
#> GSM710863     1  0.0000     0.9260 1.000 0.000
#> GSM710865     1  0.0000     0.9260 1.000 0.000
#> GSM710867     1  0.0000     0.9260 1.000 0.000
#> GSM710869     1  0.2778     0.8939 0.952 0.048
#> GSM710871     1  0.0000     0.9260 1.000 0.000
#> GSM710873     1  0.0000     0.9260 1.000 0.000
#> GSM710802     1  0.0000     0.9260 1.000 0.000
#> GSM710803     1  0.0000     0.9260 1.000 0.000
#> GSM710804     2  0.8267     0.6297 0.260 0.740
#> GSM710805     2  0.0000     0.9017 0.000 1.000
#> GSM710806     1  0.7139     0.7204 0.804 0.196
#> GSM710807     1  0.0000     0.9260 1.000 0.000
#> GSM710808     1  0.0000     0.9260 1.000 0.000
#> GSM710809     1  0.7139     0.7310 0.804 0.196
#> GSM710810     1  0.2423     0.9001 0.960 0.040
#> GSM710811     1  0.0000     0.9260 1.000 0.000
#> GSM710812     1  0.0000     0.9260 1.000 0.000
#> GSM710821     1  0.0000     0.9260 1.000 0.000
#> GSM710822     1  0.0000     0.9260 1.000 0.000
#> GSM710823     1  0.8713     0.5495 0.708 0.292
#> GSM710824     2  0.6247     0.7778 0.156 0.844
#> GSM710825     1  0.0000     0.9260 1.000 0.000
#> GSM710826     1  0.0000     0.9260 1.000 0.000
#> GSM710827     1  0.0000     0.9260 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-skmeans-consensus-heatmap-1
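The consensus value underlying this heatmap can be sketched as the fraction of partitions in which a pair of samples falls into the same group (a simplified illustration of the idea, not cola's internal code):

```r
# Consensus matrix as the fraction of partitions in which each pair
# of samples co-clusters, over repeated partitions from row resampling.
partitions = list(c(1, 1, 2, 2),   # partition from resampling round 1
                  c(1, 1, 1, 2),   # round 2
                  c(2, 2, 1, 1))   # round 3 (group labels may permute freely)
consensus = Reduce(`+`, lapply(partitions, function(cl) outer(cl, cl, "=="))) /
    length(partitions)
consensus[1, 3]   # samples 1 and 3 co-cluster in 1 of 3 partitions
```

A stable pair of samples has a consensus value near 0 or 1; intermediate values indicate that the pair is split inconsistently across resampling rounds.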

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-skmeans-membership-heatmap-1

Once we have the classes for columns, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. The following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
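Since tb is an ordinary data frame, the signatures can be filtered and grouped with base R (a demonstration only, using values copied from the example output above):

```r
# Demonstration: subset signature rows at FDR < 0.01 and split them
# by k-means row group, using values from the example output above.
tb = data.frame(which_row = c(38, 40, 59, 98),
                fdr       = c(0.0428, 0.0187, 0.0061, 0.0094),
                km        = c(1, 1, 1, 2))
sig = tb[tb$fdr < 0.01, ]
split(sig$which_row, sig$km)   # signature row indices per row group
```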

The UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-skmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.
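A minimal illustration of the discrete case, with hypothetical class labels and annotation (not data from this report):

```r
# Illustration: chi-squared contingency table test between a partition
# and a discrete annotation (hypothetical toy labels).
cl   = c(1, 1, 1, 2, 2, 2, 2, 1)                  # subgroup labels
anno = c("a", "a", "b", "b", "b", "b", "a", "a")  # discrete annotation
# suppressWarnings() because expected counts are small in this toy table
suppressWarnings(chisq.test(table(cl, anno), correct = FALSE))$p.value
```

A small p-value, as in the specimen(p) column below, indicates that the subgroups are associated with the annotation.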

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> SD:skmeans 51    6.76e-09         0.680 2
#> SD:skmeans 52    5.98e-09         0.683 3
#> SD:skmeans 42    6.28e-07         0.508 4
#> SD:skmeans 35    2.86e-06         0.860 5
#> SD:skmeans 31    7.96e-04         0.924 6

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


SD:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "pam"]
# you can also extract it by
# res = res_list["SD:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into one single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-pam-collect-plots

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing the "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.600           0.859       0.932         0.5056 0.491   0.491
#> 3 3 0.651           0.742       0.883         0.3301 0.734   0.509
#> 4 4 0.641           0.636       0.779         0.1150 0.817   0.517
#> 5 5 0.653           0.581       0.766         0.0696 0.925   0.708
#> 6 6 0.699           0.600       0.786         0.0422 0.918   0.619

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of an item belonging to a certain group; the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.2236      0.888 0.036 0.964
#> GSM710829     2  0.2236      0.891 0.036 0.964
#> GSM710839     2  0.0000      0.899 0.000 1.000
#> GSM710841     2  0.5519      0.835 0.128 0.872
#> GSM710843     2  0.0000      0.899 0.000 1.000
#> GSM710845     1  0.4022      0.889 0.920 0.080
#> GSM710846     2  0.0000      0.899 0.000 1.000
#> GSM710849     2  0.0672      0.898 0.008 0.992
#> GSM710853     2  0.0000      0.899 0.000 1.000
#> GSM710855     2  0.0672      0.898 0.008 0.992
#> GSM710858     2  0.0000      0.899 0.000 1.000
#> GSM710860     2  0.0000      0.899 0.000 1.000
#> GSM710801     2  0.0000      0.899 0.000 1.000
#> GSM710813     2  0.2423      0.889 0.040 0.960
#> GSM710814     2  0.0000      0.899 0.000 1.000
#> GSM710815     2  0.0000      0.899 0.000 1.000
#> GSM710816     2  0.5737      0.816 0.136 0.864
#> GSM710817     2  0.5946      0.822 0.144 0.856
#> GSM710818     2  0.2948      0.881 0.052 0.948
#> GSM710819     1  0.8267      0.636 0.740 0.260
#> GSM710820     2  0.0000      0.899 0.000 1.000
#> GSM710830     1  0.0000      0.943 1.000 0.000
#> GSM710831     2  0.0000      0.899 0.000 1.000
#> GSM710832     1  0.0000      0.943 1.000 0.000
#> GSM710833     2  0.5294      0.832 0.120 0.880
#> GSM710834     1  0.8327      0.630 0.736 0.264
#> GSM710835     2  0.8327      0.694 0.264 0.736
#> GSM710836     1  0.5059      0.853 0.888 0.112
#> GSM710837     1  0.0000      0.943 1.000 0.000
#> GSM710862     1  0.1184      0.934 0.984 0.016
#> GSM710863     1  0.0000      0.943 1.000 0.000
#> GSM710865     1  0.0000      0.943 1.000 0.000
#> GSM710867     1  0.0000      0.943 1.000 0.000
#> GSM710869     1  0.0000      0.943 1.000 0.000
#> GSM710871     1  0.0000      0.943 1.000 0.000
#> GSM710873     1  0.5629      0.833 0.868 0.132
#> GSM710802     1  0.1184      0.936 0.984 0.016
#> GSM710803     1  0.2236      0.923 0.964 0.036
#> GSM710804     2  0.8144      0.704 0.252 0.748
#> GSM710805     2  0.8144      0.704 0.252 0.748
#> GSM710806     2  0.9983      0.163 0.476 0.524
#> GSM710807     1  0.0938      0.938 0.988 0.012
#> GSM710808     1  0.0000      0.943 1.000 0.000
#> GSM710809     2  0.8207      0.699 0.256 0.744
#> GSM710810     1  0.0000      0.943 1.000 0.000
#> GSM710811     1  0.3114      0.910 0.944 0.056
#> GSM710812     1  0.0000      0.943 1.000 0.000
#> GSM710821     1  0.0000      0.943 1.000 0.000
#> GSM710822     1  0.0000      0.943 1.000 0.000
#> GSM710823     1  0.9000      0.532 0.684 0.316
#> GSM710824     2  0.6148      0.801 0.152 0.848
#> GSM710825     1  0.0000      0.943 1.000 0.000
#> GSM710826     1  0.0000      0.943 1.000 0.000
#> GSM710827     1  0.0000      0.943 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-pam-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-pam-membership-heatmap-1

Once we have the classes for columns, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. The following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-pam-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n specimen(p) individual(p) k
#> SD:pam 53    4.12e-06         0.889 2
#> SD:pam 46    1.40e-07         0.790 3
#> SD:pam 44    2.68e-05         0.896 4
#> SD:pam 36    4.68e-04         0.699 5
#> SD:pam 38    1.08e-04         0.334 6

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


SD:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "mclust"]
# you can also extract it by
# res = res_list["SD:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 4.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into one single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-mclust-collect-plots

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing the "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.293           0.602       0.797         0.4037 0.491   0.491
#> 3 3 0.303           0.509       0.769         0.4396 0.610   0.377
#> 4 4 0.675           0.750       0.829         0.2780 0.795   0.501
#> 5 5 0.650           0.691       0.806         0.0557 0.951   0.810
#> 6 6 0.685           0.592       0.786         0.0504 0.925   0.674

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 4

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of an item belonging to a certain group; the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000     0.7270 0.000 1.000
#> GSM710829     2  0.3879     0.7653 0.076 0.924
#> GSM710839     2  0.0000     0.7270 0.000 1.000
#> GSM710841     2  0.3879     0.7653 0.076 0.924
#> GSM710843     2  0.0000     0.7270 0.000 1.000
#> GSM710845     2  0.9815     0.1688 0.420 0.580
#> GSM710846     2  0.3879     0.7653 0.076 0.924
#> GSM710849     2  0.3879     0.7653 0.076 0.924
#> GSM710853     2  0.3879     0.7653 0.076 0.924
#> GSM710855     2  0.8267     0.5255 0.260 0.740
#> GSM710858     2  0.3879     0.7653 0.076 0.924
#> GSM710860     2  0.0000     0.7270 0.000 1.000
#> GSM710801     2  0.3879     0.7653 0.076 0.924
#> GSM710813     2  0.8207     0.6171 0.256 0.744
#> GSM710814     2  0.0000     0.7270 0.000 1.000
#> GSM710815     2  0.0000     0.7270 0.000 1.000
#> GSM710816     2  0.7056     0.6314 0.192 0.808
#> GSM710817     2  0.8327     0.6058 0.264 0.736
#> GSM710818     2  0.9522     0.2158 0.372 0.628
#> GSM710819     1  0.9522     0.6189 0.628 0.372
#> GSM710820     2  0.3879     0.7653 0.076 0.924
#> GSM710830     1  0.0672     0.6271 0.992 0.008
#> GSM710831     2  0.8386     0.5995 0.268 0.732
#> GSM710832     1  0.0000     0.6207 1.000 0.000
#> GSM710833     1  0.9522     0.6189 0.628 0.372
#> GSM710834     2  0.9686     0.2675 0.396 0.604
#> GSM710835     2  0.8955     0.5110 0.312 0.688
#> GSM710836     1  0.9522     0.6189 0.628 0.372
#> GSM710837     1  0.9522     0.6189 0.628 0.372
#> GSM710862     1  0.9460     0.6226 0.636 0.364
#> GSM710863     1  0.4562     0.6504 0.904 0.096
#> GSM710865     1  0.6048     0.6523 0.852 0.148
#> GSM710867     1  0.9427     0.6236 0.640 0.360
#> GSM710869     1  0.9522     0.6189 0.628 0.372
#> GSM710871     1  0.1184     0.6306 0.984 0.016
#> GSM710873     1  0.9522     0.6189 0.628 0.372
#> GSM710802     1  0.9522     0.6189 0.628 0.372
#> GSM710803     1  0.0000     0.6207 1.000 0.000
#> GSM710804     2  0.3879     0.7653 0.076 0.924
#> GSM710805     2  0.8386     0.5995 0.268 0.732
#> GSM710806     2  0.9944    -0.0146 0.456 0.544
#> GSM710807     1  0.9522     0.6189 0.628 0.372
#> GSM710808     1  0.1843     0.6275 0.972 0.028
#> GSM710809     1  0.9933     0.3937 0.548 0.452
#> GSM710810     1  0.9460     0.6188 0.636 0.364
#> GSM710811     1  0.0672     0.6271 0.992 0.008
#> GSM710812     1  0.5059     0.6528 0.888 0.112
#> GSM710821     1  0.6801     0.5791 0.820 0.180
#> GSM710822     1  0.9522     0.6189 0.628 0.372
#> GSM710823     1  0.9522     0.6189 0.628 0.372
#> GSM710824     2  0.9833     0.1376 0.424 0.576
#> GSM710825     1  0.9933     0.3882 0.548 0.452
#> GSM710826     1  0.0672     0.6271 0.992 0.008
#> GSM710827     1  0.0672     0.6267 0.992 0.008

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-mclust-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-mclust-membership-heatmap-1

Once we have the classes for columns, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. The following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-mclust-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n specimen(p) individual(p) k
#> SD:mclust 47    2.95e-07         0.640 2
#> SD:mclust 33    6.38e-06         0.390 3
#> SD:mclust 52    3.13e-07         0.766 4
#> SD:mclust 48    2.38e-08         0.796 5
#> SD:mclust 40    1.56e-05         0.972 6

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


SD:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "NMF"]
# you can also extract it by
# res = res_list["SD:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into one single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-NMF-collect-plots

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing the "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.675           0.838       0.933         0.4962 0.508   0.508
#> 3 3 0.479           0.663       0.813         0.3402 0.809   0.631
#> 4 4 0.554           0.646       0.760         0.1248 0.788   0.462
#> 5 5 0.576           0.443       0.726         0.0682 0.811   0.393
#> 6 6 0.643           0.499       0.715         0.0460 0.840   0.381

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of an item belonging to a certain group; the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000    0.92852 0.000 1.000
#> GSM710829     2  0.0938    0.92322 0.012 0.988
#> GSM710839     2  0.0000    0.92852 0.000 1.000
#> GSM710841     2  0.5294    0.83540 0.120 0.880
#> GSM710843     2  0.0000    0.92852 0.000 1.000
#> GSM710845     2  0.0000    0.92852 0.000 1.000
#> GSM710846     2  0.0000    0.92852 0.000 1.000
#> GSM710849     2  0.2043    0.91163 0.032 0.968
#> GSM710853     2  0.0000    0.92852 0.000 1.000
#> GSM710855     2  0.9460    0.36288 0.364 0.636
#> GSM710858     2  0.0000    0.92852 0.000 1.000
#> GSM710860     2  0.0000    0.92852 0.000 1.000
#> GSM710801     2  0.0000    0.92852 0.000 1.000
#> GSM710813     2  0.5737    0.81756 0.136 0.864
#> GSM710814     2  0.0000    0.92852 0.000 1.000
#> GSM710815     2  0.0000    0.92852 0.000 1.000
#> GSM710816     2  0.0000    0.92852 0.000 1.000
#> GSM710817     1  0.7453    0.71387 0.788 0.212
#> GSM710818     2  0.0376    0.92674 0.004 0.996
#> GSM710819     1  0.9358    0.47920 0.648 0.352
#> GSM710820     2  0.0000    0.92852 0.000 1.000
#> GSM710830     1  0.0000    0.91896 1.000 0.000
#> GSM710831     1  0.9954    0.13509 0.540 0.460
#> GSM710832     1  0.0000    0.91896 1.000 0.000
#> GSM710833     1  0.9427    0.46225 0.640 0.360
#> GSM710834     2  1.0000    0.00233 0.496 0.504
#> GSM710835     1  0.0000    0.91896 1.000 0.000
#> GSM710836     1  0.0000    0.91896 1.000 0.000
#> GSM710837     1  0.0000    0.91896 1.000 0.000
#> GSM710862     1  0.6712    0.76473 0.824 0.176
#> GSM710863     1  0.0000    0.91896 1.000 0.000
#> GSM710865     1  0.0000    0.91896 1.000 0.000
#> GSM710867     1  0.0000    0.91896 1.000 0.000
#> GSM710869     1  0.0000    0.91896 1.000 0.000
#> GSM710871     1  0.0000    0.91896 1.000 0.000
#> GSM710873     1  0.0000    0.91896 1.000 0.000
#> GSM710802     1  0.0000    0.91896 1.000 0.000
#> GSM710803     1  0.0000    0.91896 1.000 0.000
#> GSM710804     1  0.6712    0.76024 0.824 0.176
#> GSM710805     2  0.3879    0.88091 0.076 0.924
#> GSM710806     1  0.0000    0.91896 1.000 0.000
#> GSM710807     1  0.0000    0.91896 1.000 0.000
#> GSM710808     1  0.0000    0.91896 1.000 0.000
#> GSM710809     1  0.0000    0.91896 1.000 0.000
#> GSM710810     1  0.0000    0.91896 1.000 0.000
#> GSM710811     1  0.0000    0.91896 1.000 0.000
#> GSM710812     1  0.0000    0.91896 1.000 0.000
#> GSM710821     1  0.2948    0.88167 0.948 0.052
#> GSM710822     1  0.0000    0.91896 1.000 0.000
#> GSM710823     1  0.6712    0.76559 0.824 0.176
#> GSM710824     2  0.4562    0.85604 0.096 0.904
#> GSM710825     1  0.8763    0.55745 0.704 0.296
#> GSM710826     1  0.0000    0.91896 1.000 0.000
#> GSM710827     1  0.0000    0.91896 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-NMF-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-NMF-membership-heatmap-1

Once we have the classes for columns, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. The following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
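
As a small example of working with tb, the rows can be filtered by FDR and mapped back to the input matrix; this sketch assumes the tb and mat objects from above:

```r
# code only for demonstration: keep signatures with FDR < 0.05,
# ordered by significance, and extract the matching matrix rows
sig = tb[tb$fdr < 0.05, ]
sig = sig[order(sig$fdr), ]
sig_mat = mat[sig$which_row, , drop = FALSE]
```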

UMAP plot, which shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n specimen(p) individual(p) k
#> SD:NMF 49    6.13e-09         0.517 2
#> SD:NMF 44    4.88e-08         0.860 3
#> SD:NMF 44    2.06e-07         0.627 4
#> SD:NMF 26    5.39e-04         0.622 5
#> SD:NMF 31    3.18e-03         0.422 6
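
The same kinds of tests can be sketched directly in base R with hypothetical class and annotation vectors (an illustration, not cola's exact implementation):

```r
# hypothetical class labels for six samples
class = factor(c(1, 1, 2, 2, 2, 1))

# discrete annotation: chi-squared test on the contingency table
anno_discrete = factor(c("a", "a", "b", "b", "a", "b"))
chisq.test(table(class, anno_discrete))

# numeric annotation: one-way ANOVA against the classes
anno_numeric = c(0.2, 0.4, 1.1, 1.3, 0.9, 0.5)
summary(aov(anno_numeric ~ class))
```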

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


CV:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "hclust"]
# you can also extract it by
# res = res_list["CV:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 5.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-hclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If they are too similar, we do not accept that k is better than k-1.
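
As a rough illustration of what the last two indices measure, the pair-counting Rand and Jaccard indices can be sketched in base R (toy partitions; not cola's exact implementation):

```r
# Toy partitions: p1 for k-1 groups, p2 for k groups
p1 = c(1, 1, 1, 2, 2, 2)
p2 = c(1, 1, 2, 2, 2, 3)

# For every pair of samples, check whether the two samples fall in the
# same group under each partition
pairs = combn(length(p1), 2)
same1 = p1[pairs[1, ]] == p1[pairs[2, ]]
same2 = p2[pairs[1, ]] == p2[pairs[2, ]]

rand    = mean(same1 == same2)                    # fraction of agreeing pairs
jaccard = sum(same1 & same2) / sum(same1 | same2) # co-clustered pairs only
```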

select_partition_number(res)

plot of chunk CV-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.206           0.785       0.855          0.313 0.693   0.693
#> 3 3 0.231           0.759       0.847          0.281 0.989   0.984
#> 4 4 0.276           0.385       0.689          0.356 0.832   0.754
#> 5 5 0.358           0.557       0.702          0.203 0.714   0.481
#> 6 6 0.547           0.609       0.775          0.136 0.875   0.629

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 5

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is determined by the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix, and the silhouette score is calculated from the consensus matrix.
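
For intuition, the entropy column can be reproduced by hand; this sketch assumes it is the Shannon entropy of one sample's membership probabilities, normalized by log2(k):

```r
# membership row for one sample at k = 2 (e.g. GSM710828 in the table below)
p = c(0.772, 0.228)
entropy = -sum(p * log2(p)) / log2(length(p))
round(entropy, 4)   # ~0.7745, matching the entropy column
```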

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     1  0.7745      0.704 0.772 0.228
#> GSM710829     1  0.7745      0.710 0.772 0.228
#> GSM710839     1  0.7674      0.712 0.776 0.224
#> GSM710841     2  0.9896      0.560 0.440 0.560
#> GSM710843     1  0.7219      0.752 0.800 0.200
#> GSM710845     1  0.4161      0.848 0.916 0.084
#> GSM710846     2  0.9087      0.771 0.324 0.676
#> GSM710849     2  0.8763      0.792 0.296 0.704
#> GSM710853     2  0.7056      0.776 0.192 0.808
#> GSM710855     2  0.5737      0.733 0.136 0.864
#> GSM710858     2  0.6887      0.764 0.184 0.816
#> GSM710860     2  0.9661      0.624 0.392 0.608
#> GSM710801     1  0.8081      0.672 0.752 0.248
#> GSM710813     1  0.7219      0.749 0.800 0.200
#> GSM710814     1  0.7602      0.719 0.780 0.220
#> GSM710815     1  0.7219      0.751 0.800 0.200
#> GSM710816     1  0.5946      0.821 0.856 0.144
#> GSM710817     1  0.7453      0.713 0.788 0.212
#> GSM710818     1  0.7674      0.710 0.776 0.224
#> GSM710819     2  0.8499      0.704 0.276 0.724
#> GSM710820     2  0.8713      0.792 0.292 0.708
#> GSM710830     1  0.1184      0.853 0.984 0.016
#> GSM710831     1  0.7528      0.705 0.784 0.216
#> GSM710832     1  0.1184      0.853 0.984 0.016
#> GSM710833     1  0.6801      0.798 0.820 0.180
#> GSM710834     1  0.4022      0.849 0.920 0.080
#> GSM710835     1  0.5178      0.815 0.884 0.116
#> GSM710836     1  0.6148      0.766 0.848 0.152
#> GSM710837     1  0.4022      0.836 0.920 0.080
#> GSM710862     1  0.2948      0.855 0.948 0.052
#> GSM710863     1  0.1414      0.856 0.980 0.020
#> GSM710865     1  0.2236      0.858 0.964 0.036
#> GSM710867     1  0.1633      0.852 0.976 0.024
#> GSM710869     1  0.3733      0.848 0.928 0.072
#> GSM710871     1  0.1633      0.852 0.976 0.024
#> GSM710873     2  0.9795      0.580 0.416 0.584
#> GSM710802     1  0.5408      0.835 0.876 0.124
#> GSM710803     1  0.1184      0.853 0.984 0.016
#> GSM710804     1  0.7815      0.620 0.768 0.232
#> GSM710805     1  0.5842      0.830 0.860 0.140
#> GSM710806     1  0.2603      0.848 0.956 0.044
#> GSM710807     1  0.5178      0.821 0.884 0.116
#> GSM710808     1  0.1843      0.852 0.972 0.028
#> GSM710809     1  0.7528      0.695 0.784 0.216
#> GSM710810     1  0.2778      0.855 0.952 0.048
#> GSM710811     1  0.1843      0.852 0.972 0.028
#> GSM710812     1  0.2043      0.856 0.968 0.032
#> GSM710821     1  0.0376      0.853 0.996 0.004
#> GSM710822     1  0.4690      0.834 0.900 0.100
#> GSM710823     1  0.4431      0.836 0.908 0.092
#> GSM710824     1  0.4298      0.848 0.912 0.088
#> GSM710825     1  0.2778      0.854 0.952 0.048
#> GSM710826     1  0.1184      0.853 0.984 0.016
#> GSM710827     1  0.0376      0.853 0.996 0.004

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-hclust-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-hclust-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures that are significantly different between classes; these can serve as candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot, which shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n specimen(p) individual(p) k
#> CV:hclust 54    9.21e-04        0.0798 2
#> CV:hclust 50    1.18e-02        0.4705 3
#> CV:hclust 33    8.90e-07        0.3745 4
#> CV:hclust 41    3.43e-07        0.4551 5
#> CV:hclust 40    2.80e-05        0.8422 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


CV:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "kmeans"]
# you can also extract it by
# res = res_list["CV:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-kmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk CV-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.799           0.842       0.931         0.4848 0.508   0.508
#> 3 3 0.503           0.547       0.792         0.3506 0.738   0.534
#> 4 4 0.561           0.517       0.734         0.1321 0.860   0.632
#> 5 5 0.599           0.505       0.695         0.0662 0.852   0.527
#> 6 6 0.622           0.520       0.693         0.0467 0.920   0.656

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2
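
The is_best_k() and is_stable_k() helpers listed in the method summary above can confirm this choice programmatically; a demonstration sketch (the exact return values depend on the data):

```r
# code only for demonstration: query the same 'res' object
is_best_k(res, k = 2)     # TRUE if k = 2 matches the suggested best k
is_stable_k(res, k = 2)   # TRUE if the k = 2 partition is considered stable
```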

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is determined by the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix, and the silhouette score is calculated from the consensus matrix.

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.2236      0.902 0.036 0.964
#> GSM710829     2  0.1184      0.902 0.016 0.984
#> GSM710839     2  0.2236      0.902 0.036 0.964
#> GSM710841     2  0.0000      0.899 0.000 1.000
#> GSM710843     2  0.2236      0.902 0.036 0.964
#> GSM710845     1  0.9963     -0.003 0.536 0.464
#> GSM710846     2  0.0376      0.900 0.004 0.996
#> GSM710849     2  0.0000      0.899 0.000 1.000
#> GSM710853     2  0.0000      0.899 0.000 1.000
#> GSM710855     2  0.3114      0.872 0.056 0.944
#> GSM710858     2  0.0000      0.899 0.000 1.000
#> GSM710860     2  0.2236      0.902 0.036 0.964
#> GSM710801     2  0.2236      0.902 0.036 0.964
#> GSM710813     2  0.7602      0.748 0.220 0.780
#> GSM710814     2  0.2236      0.902 0.036 0.964
#> GSM710815     2  0.2236      0.902 0.036 0.964
#> GSM710816     2  0.8608      0.662 0.284 0.716
#> GSM710817     2  0.9732      0.363 0.404 0.596
#> GSM710818     2  0.2236      0.902 0.036 0.964
#> GSM710819     1  0.1843      0.930 0.972 0.028
#> GSM710820     2  0.0000      0.899 0.000 1.000
#> GSM710830     1  0.0376      0.935 0.996 0.004
#> GSM710831     2  0.9661      0.395 0.392 0.608
#> GSM710832     1  0.0000      0.935 1.000 0.000
#> GSM710833     1  0.1414      0.932 0.980 0.020
#> GSM710834     1  0.9866      0.119 0.568 0.432
#> GSM710835     1  0.2236      0.927 0.964 0.036
#> GSM710836     1  0.1843      0.930 0.972 0.028
#> GSM710837     1  0.2236      0.927 0.964 0.036
#> GSM710862     1  0.0000      0.935 1.000 0.000
#> GSM710863     1  0.0000      0.935 1.000 0.000
#> GSM710865     1  0.0000      0.935 1.000 0.000
#> GSM710867     1  0.2043      0.929 0.968 0.032
#> GSM710869     1  0.0376      0.935 0.996 0.004
#> GSM710871     1  0.2043      0.929 0.968 0.032
#> GSM710873     1  0.2236      0.927 0.964 0.036
#> GSM710802     1  0.0000      0.935 1.000 0.000
#> GSM710803     1  0.0000      0.935 1.000 0.000
#> GSM710804     2  0.3879      0.860 0.076 0.924
#> GSM710805     2  0.7602      0.720 0.220 0.780
#> GSM710806     1  0.2236      0.927 0.964 0.036
#> GSM710807     1  0.2236      0.927 0.964 0.036
#> GSM710808     1  0.0000      0.935 1.000 0.000
#> GSM710809     1  0.2236      0.927 0.964 0.036
#> GSM710810     1  0.0000      0.935 1.000 0.000
#> GSM710811     1  0.0376      0.935 0.996 0.004
#> GSM710812     1  0.0000      0.935 1.000 0.000
#> GSM710821     1  0.0000      0.935 1.000 0.000
#> GSM710822     1  0.1843      0.930 0.972 0.028
#> GSM710823     1  0.0376      0.935 0.996 0.004
#> GSM710824     1  0.9686      0.242 0.604 0.396
#> GSM710825     1  0.3114      0.885 0.944 0.056
#> GSM710826     1  0.0376      0.935 0.996 0.004
#> GSM710827     1  0.0000      0.935 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-kmeans-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-kmeans-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures that are significantly different between classes; these can serve as candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot, which shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n specimen(p) individual(p) k
#> CV:kmeans 49    6.13e-09         0.682 2
#> CV:kmeans 38    2.55e-08         0.571 3
#> CV:kmeans 32    2.92e-06         0.810 4
#> CV:kmeans 40    5.30e-06         0.642 5
#> CV:kmeans 35    1.07e-04         0.775 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


CV:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "skmeans"]
# you can also extract it by
# res = res_list["CV:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-skmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk CV-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.885           0.947       0.974         0.5045 0.493   0.493
#> 3 3 0.462           0.551       0.789         0.3305 0.785   0.587
#> 4 4 0.455           0.475       0.626         0.1223 0.821   0.541
#> 5 5 0.508           0.334       0.583         0.0648 0.825   0.453
#> 6 6 0.551           0.301       0.584         0.0413 0.932   0.680

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is determined by the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix, and the silhouette score is calculated from the consensus matrix.

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000      0.962 0.000 1.000
#> GSM710829     2  0.0000      0.962 0.000 1.000
#> GSM710839     2  0.0000      0.962 0.000 1.000
#> GSM710841     2  0.0000      0.962 0.000 1.000
#> GSM710843     2  0.0000      0.962 0.000 1.000
#> GSM710845     2  0.6801      0.793 0.180 0.820
#> GSM710846     2  0.0000      0.962 0.000 1.000
#> GSM710849     2  0.0000      0.962 0.000 1.000
#> GSM710853     2  0.0000      0.962 0.000 1.000
#> GSM710855     2  0.1633      0.948 0.024 0.976
#> GSM710858     2  0.0000      0.962 0.000 1.000
#> GSM710860     2  0.0000      0.962 0.000 1.000
#> GSM710801     2  0.0000      0.962 0.000 1.000
#> GSM710813     2  0.0000      0.962 0.000 1.000
#> GSM710814     2  0.0000      0.962 0.000 1.000
#> GSM710815     2  0.0000      0.962 0.000 1.000
#> GSM710816     2  0.0000      0.962 0.000 1.000
#> GSM710817     2  0.5408      0.862 0.124 0.876
#> GSM710818     2  0.0376      0.960 0.004 0.996
#> GSM710819     1  0.1843      0.962 0.972 0.028
#> GSM710820     2  0.0000      0.962 0.000 1.000
#> GSM710830     1  0.0000      0.980 1.000 0.000
#> GSM710831     2  0.3274      0.922 0.060 0.940
#> GSM710832     1  0.0000      0.980 1.000 0.000
#> GSM710833     1  0.6048      0.831 0.852 0.148
#> GSM710834     2  0.7528      0.746 0.216 0.784
#> GSM710835     1  0.0672      0.976 0.992 0.008
#> GSM710836     1  0.0000      0.980 1.000 0.000
#> GSM710837     1  0.0000      0.980 1.000 0.000
#> GSM710862     1  0.0376      0.978 0.996 0.004
#> GSM710863     1  0.0000      0.980 1.000 0.000
#> GSM710865     1  0.0000      0.980 1.000 0.000
#> GSM710867     1  0.0000      0.980 1.000 0.000
#> GSM710869     1  0.0000      0.980 1.000 0.000
#> GSM710871     1  0.0000      0.980 1.000 0.000
#> GSM710873     1  0.0000      0.980 1.000 0.000
#> GSM710802     1  0.1843      0.962 0.972 0.028
#> GSM710803     1  0.0000      0.980 1.000 0.000
#> GSM710804     2  0.0938      0.955 0.012 0.988
#> GSM710805     2  0.0000      0.962 0.000 1.000
#> GSM710806     1  0.0376      0.978 0.996 0.004
#> GSM710807     1  0.0000      0.980 1.000 0.000
#> GSM710808     1  0.0000      0.980 1.000 0.000
#> GSM710809     1  0.0000      0.980 1.000 0.000
#> GSM710810     1  0.3274      0.936 0.940 0.060
#> GSM710811     1  0.0000      0.980 1.000 0.000
#> GSM710812     1  0.0000      0.980 1.000 0.000
#> GSM710821     1  0.1414      0.968 0.980 0.020
#> GSM710822     1  0.0000      0.980 1.000 0.000
#> GSM710823     1  0.6438      0.804 0.836 0.164
#> GSM710824     2  0.8327      0.668 0.264 0.736
#> GSM710825     1  0.3584      0.928 0.932 0.068
#> GSM710826     1  0.0000      0.980 1.000 0.000
#> GSM710827     1  0.0000      0.980 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-skmeans-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-skmeans-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures that are significantly different between classes; these can serve as candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot, which shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-skmeans-dimension-reduction-1
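
Other projections can be drawn from the same object; a demonstration sketch, assuming "PCA" and "MDS" are among the supported method values of dimension_reduction():

```r
# code only for demonstration
dimension_reduction(res, k = 2, method = "PCA")
dimension_reduction(res, k = 2, method = "MDS")
```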

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> CV:skmeans 54    4.40e-08        0.5852 2
#> CV:skmeans 42    1.57e-08        0.7811 3
#> CV:skmeans 28    2.96e-05        0.2840 4
#> CV:skmeans 14    9.12e-04        0.0798 5
#> CV:skmeans 11    6.76e-03        0.3907 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


CV:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "pam"]
# you can also extract it by
# res = res_list["CV:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-pam-collect-plots

All the plots in the panels can also be generated by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.
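As a rough illustration of what the PAC score measures, the following base-R sketch computes it from a toy consensus matrix. The thresholds 0.1 and 0.9 are illustrative assumptions; cola's own implementation differs in detail.

```r
# Toy consensus matrix for 6 samples: two clean groups of 3,
# so every pairwise consensus value is exactly 0 or 1.
cm = matrix(0, nrow = 6, ncol = 6)
cm[1:3, 1:3] = 1
cm[4:6, 4:6] = 1

# PAC = fraction of pairwise consensus values falling in the
# "ambiguous" interval (x1, x2); a lower PAC means a cleaner partition.
pac = function(consensus, x1 = 0.1, x2 = 0.9) {
    v = consensus[lower.tri(consensus)]
    mean(v > x1 & v < x2)
}
pac(cm)  # 0 for a perfectly clean consensus matrix
```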

select_partition_number(res)

plot of chunk CV-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.577           0.857       0.931         0.5061 0.491   0.491
#> 3 3 0.398           0.580       0.715         0.2775 0.848   0.703
#> 4 4 0.559           0.553       0.779         0.1585 0.817   0.548
#> 5 5 0.609           0.626       0.763         0.0596 0.918   0.689
#> 6 6 0.628           0.529       0.731         0.0369 0.964   0.828

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of the sample belonging to a certain group. The final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
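A minimal sketch of how the class label follows from the membership matrix, using hypothetical probabilities for three items:

```r
# Hypothetical membership matrix for three items with k = 2 groups;
# the class label is simply the column with the highest probability.
m = rbind(c(0.004, 0.996),
          c(0.684, 0.316),
          c(0.512, 0.488))
apply(m, 1, which.max)  # 2 1 1
```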


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0376     0.9491 0.004 0.996
#> GSM710829     2  0.0000     0.9507 0.000 1.000
#> GSM710839     2  0.0376     0.9491 0.004 0.996
#> GSM710841     2  0.0000     0.9507 0.000 1.000
#> GSM710843     2  0.0000     0.9507 0.000 1.000
#> GSM710845     1  0.9000     0.6141 0.684 0.316
#> GSM710846     2  0.0000     0.9507 0.000 1.000
#> GSM710849     2  0.0000     0.9507 0.000 1.000
#> GSM710853     2  0.0000     0.9507 0.000 1.000
#> GSM710855     2  0.2423     0.9232 0.040 0.960
#> GSM710858     2  0.0000     0.9507 0.000 1.000
#> GSM710860     2  0.0000     0.9507 0.000 1.000
#> GSM710801     2  0.0000     0.9507 0.000 1.000
#> GSM710813     2  0.0000     0.9507 0.000 1.000
#> GSM710814     2  0.0000     0.9507 0.000 1.000
#> GSM710815     2  0.0000     0.9507 0.000 1.000
#> GSM710816     2  0.4562     0.8843 0.096 0.904
#> GSM710817     2  0.0376     0.9490 0.004 0.996
#> GSM710818     1  0.9580     0.4885 0.620 0.380
#> GSM710819     1  0.6973     0.7823 0.812 0.188
#> GSM710820     2  0.0000     0.9507 0.000 1.000
#> GSM710830     1  0.9996     0.0333 0.512 0.488
#> GSM710831     2  0.0376     0.9490 0.004 0.996
#> GSM710832     1  0.0000     0.8942 1.000 0.000
#> GSM710833     2  0.4815     0.8710 0.104 0.896
#> GSM710834     2  0.6712     0.8015 0.176 0.824
#> GSM710835     2  0.6148     0.8248 0.152 0.848
#> GSM710836     1  0.5178     0.8377 0.884 0.116
#> GSM710837     1  0.0000     0.8942 1.000 0.000
#> GSM710862     1  0.1843     0.8877 0.972 0.028
#> GSM710863     1  0.0000     0.8942 1.000 0.000
#> GSM710865     1  0.0000     0.8942 1.000 0.000
#> GSM710867     1  0.0000     0.8942 1.000 0.000
#> GSM710869     1  0.1414     0.8904 0.980 0.020
#> GSM710871     1  0.0000     0.8942 1.000 0.000
#> GSM710873     1  0.4690     0.8438 0.900 0.100
#> GSM710802     1  0.9248     0.5250 0.660 0.340
#> GSM710803     1  0.0000     0.8942 1.000 0.000
#> GSM710804     2  0.0000     0.9507 0.000 1.000
#> GSM710805     2  0.0000     0.9507 0.000 1.000
#> GSM710806     2  0.5408     0.8552 0.124 0.876
#> GSM710807     1  0.2603     0.8839 0.956 0.044
#> GSM710808     1  0.0000     0.8942 1.000 0.000
#> GSM710809     1  0.7950     0.7277 0.760 0.240
#> GSM710810     1  0.2423     0.8843 0.960 0.040
#> GSM710811     1  0.0376     0.8935 0.996 0.004
#> GSM710812     1  0.0000     0.8942 1.000 0.000
#> GSM710821     1  0.7139     0.7536 0.804 0.196
#> GSM710822     1  0.0000     0.8942 1.000 0.000
#> GSM710823     2  0.8499     0.6190 0.276 0.724
#> GSM710824     2  0.6343     0.8096 0.160 0.840
#> GSM710825     1  0.0000     0.8942 1.000 0.000
#> GSM710826     1  0.4562     0.8465 0.904 0.096
#> GSM710827     1  0.0000     0.8942 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-pam-consensus-heatmap-1
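As background for reading the consensus heatmap, a consensus value is the fraction of resampled partitions in which two samples co-clustered. A simplified sketch (cola additionally tracks which samples were included in each resampling run):

```r
# Three hypothetical partitions of 4 samples from resampling runs.
partitions = rbind(c(1, 1, 2, 2),   # run 1
                   c(1, 1, 2, 2),   # run 2
                   c(1, 2, 2, 2))   # run 3
n = ncol(partitions)
consensus = matrix(0, n, n)
for (i in 1:n) for (j in 1:n)
    consensus[i, j] = mean(partitions[, i] == partitions[, j])
consensus[1, 2]  # 2/3: samples 1 and 2 co-clustered in 2 of 3 runs
```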

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-pam-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for certain classes. The following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is drawn and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-pam-dimension-reduction-1

The following heatmap shows how the subgroups are split as k increases:

collect_classes(res)

plot of chunk CV-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n specimen(p) individual(p) k
#> CV:pam 52    4.92e-05         0.579 2
#> CV:pam 44    9.39e-06         0.888 3
#> CV:pam 37    4.81e-05         0.418 4
#> CV:pam 44    5.84e-06         0.644 5
#> CV:pam 39    1.25e-05         0.571 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment analysis on the signature genes. See this vignette for more detailed explanations.


CV:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "mclust"]
# you can also extract it by
# res = res_list["CV:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 4.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-mclust-collect-plots

All the plots in the panels can also be generated by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk CV-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.466           0.804       0.840         0.4238 0.497   0.497
#> 3 3 0.404           0.583       0.797         0.4841 0.657   0.418
#> 4 4 0.631           0.732       0.852         0.1484 0.800   0.511
#> 5 5 0.635           0.610       0.703         0.0801 0.891   0.640
#> 6 6 0.638           0.570       0.750         0.0484 0.892   0.555

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 4

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of the sample belonging to a certain group. The final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
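The entropy values in the table below are consistent with the Shannon entropy of each membership row normalized by log2(k); treat the exact formula as an assumption about cola's internals. A sketch:

```r
# Normalized Shannon entropy of a membership-probability row:
# 0 = certain assignment, 1 = completely ambiguous.
membership_entropy = function(p) {
    k = length(p)
    p = p[p > 0]                  # treat 0 * log(0) as 0
    -sum(p * log2(p)) / log2(k)
}
round(membership_entropy(c(0.432, 0.568)), 4)  # 0.9866, cf. GSM710845 below
```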


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.3733      0.789 0.072 0.928
#> GSM710829     2  0.7056      0.838 0.192 0.808
#> GSM710839     2  0.0376      0.751 0.004 0.996
#> GSM710841     2  0.7376      0.835 0.208 0.792
#> GSM710843     2  0.4431      0.799 0.092 0.908
#> GSM710845     2  0.9866      0.509 0.432 0.568
#> GSM710846     2  0.7056      0.838 0.192 0.808
#> GSM710849     2  0.7376      0.835 0.208 0.792
#> GSM710853     2  0.7056      0.838 0.192 0.808
#> GSM710855     2  0.7602      0.828 0.220 0.780
#> GSM710858     2  0.7056      0.838 0.192 0.808
#> GSM710860     2  0.0376      0.751 0.004 0.996
#> GSM710801     2  0.0376      0.751 0.004 0.996
#> GSM710813     2  0.8016      0.804 0.244 0.756
#> GSM710814     2  0.0376      0.751 0.004 0.996
#> GSM710815     2  0.0376      0.751 0.004 0.996
#> GSM710816     2  0.9522      0.619 0.372 0.628
#> GSM710817     2  0.9933      0.454 0.452 0.548
#> GSM710818     2  0.7376      0.835 0.208 0.792
#> GSM710819     1  0.4298      0.910 0.912 0.088
#> GSM710820     2  0.7056      0.838 0.192 0.808
#> GSM710830     1  0.0376      0.890 0.996 0.004
#> GSM710831     2  0.9866      0.509 0.432 0.568
#> GSM710832     1  0.0000      0.894 1.000 0.000
#> GSM710833     1  0.4298      0.910 0.912 0.088
#> GSM710834     1  0.9983     -0.242 0.524 0.476
#> GSM710835     1  0.4298      0.910 0.912 0.088
#> GSM710836     1  0.4298      0.910 0.912 0.088
#> GSM710837     1  0.4298      0.910 0.912 0.088
#> GSM710862     1  0.4022      0.911 0.920 0.080
#> GSM710863     1  0.0376      0.897 0.996 0.004
#> GSM710865     1  0.1414      0.903 0.980 0.020
#> GSM710867     1  0.1843      0.905 0.972 0.028
#> GSM710869     1  0.4298      0.910 0.912 0.088
#> GSM710871     1  0.0672      0.899 0.992 0.008
#> GSM710873     1  0.4298      0.910 0.912 0.088
#> GSM710802     1  0.3879      0.911 0.924 0.076
#> GSM710803     1  0.0376      0.897 0.996 0.004
#> GSM710804     2  0.7376      0.835 0.208 0.792
#> GSM710805     2  0.7602      0.828 0.220 0.780
#> GSM710806     1  0.4298      0.910 0.912 0.088
#> GSM710807     1  0.4298      0.910 0.912 0.088
#> GSM710808     1  0.0376      0.890 0.996 0.004
#> GSM710809     1  0.4298      0.910 0.912 0.088
#> GSM710810     1  0.5178      0.872 0.884 0.116
#> GSM710811     1  0.0000      0.894 1.000 0.000
#> GSM710812     1  0.0376      0.897 0.996 0.004
#> GSM710821     1  0.2778      0.909 0.952 0.048
#> GSM710822     1  0.4298      0.910 0.912 0.088
#> GSM710823     1  0.8499      0.573 0.724 0.276
#> GSM710824     2  0.9922      0.466 0.448 0.552
#> GSM710825     1  0.6247      0.826 0.844 0.156
#> GSM710826     1  0.0672      0.894 0.992 0.008
#> GSM710827     1  0.0376      0.897 0.996 0.004

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-mclust-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-mclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for certain classes. The following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is drawn and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-mclust-dimension-reduction-1

The following heatmap shows how the subgroups are split as k increases:

collect_classes(res)

plot of chunk CV-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n specimen(p) individual(p) k
#> CV:mclust 51    1.08e-08         0.468 2
#> CV:mclust 37    4.64e-07         0.782 3
#> CV:mclust 48    6.30e-07         0.941 4
#> CV:mclust 41    8.29e-07         0.826 5
#> CV:mclust 40    3.89e-06         0.902 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment analysis on the signature genes. See this vignette for more detailed explanations.


CV:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "NMF"]
# you can also extract it by
# res = res_list["CV:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-NMF-collect-plots

All the plots in the panels can also be generated by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk CV-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.885           0.912       0.964         0.4870 0.508   0.508
#> 3 3 0.481           0.662       0.810         0.3683 0.751   0.541
#> 4 4 0.528           0.639       0.797         0.1271 0.801   0.487
#> 5 5 0.611           0.621       0.783         0.0712 0.858   0.511
#> 6 6 0.641           0.444       0.702         0.0399 0.869   0.455

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2
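A hypothetical simplification of such a decision rule, using the 1-PAC column from the get_stats() table above (the actual rules in cola also weigh silhouette and concordance):

```r
# Pick the k with the highest 1-PAC from the statistics above.
stats = data.frame(k = 2:6,
                   one_minus_pac = c(0.885, 0.481, 0.528, 0.611, 0.641))
stats$k[which.max(stats$one_minus_pac)]  # 2, matching suggest_best_k() here
```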

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of the sample belonging to a certain group. The final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000      0.943 0.000 1.000
#> GSM710829     2  0.0000      0.943 0.000 1.000
#> GSM710839     2  0.0000      0.943 0.000 1.000
#> GSM710841     2  0.0000      0.943 0.000 1.000
#> GSM710843     2  0.0000      0.943 0.000 1.000
#> GSM710845     2  0.7299      0.749 0.204 0.796
#> GSM710846     2  0.0000      0.943 0.000 1.000
#> GSM710849     2  0.0000      0.943 0.000 1.000
#> GSM710853     2  0.0000      0.943 0.000 1.000
#> GSM710855     2  0.0000      0.943 0.000 1.000
#> GSM710858     2  0.0000      0.943 0.000 1.000
#> GSM710860     2  0.0000      0.943 0.000 1.000
#> GSM710801     2  0.0000      0.943 0.000 1.000
#> GSM710813     2  0.2043      0.924 0.032 0.968
#> GSM710814     2  0.0000      0.943 0.000 1.000
#> GSM710815     2  0.0000      0.943 0.000 1.000
#> GSM710816     2  0.3114      0.906 0.056 0.944
#> GSM710817     1  0.0000      0.973 1.000 0.000
#> GSM710818     2  0.0000      0.943 0.000 1.000
#> GSM710819     1  0.0000      0.973 1.000 0.000
#> GSM710820     2  0.0000      0.943 0.000 1.000
#> GSM710830     1  0.0000      0.973 1.000 0.000
#> GSM710831     1  0.8955      0.508 0.688 0.312
#> GSM710832     1  0.0000      0.973 1.000 0.000
#> GSM710833     1  0.0000      0.973 1.000 0.000
#> GSM710834     1  0.9815      0.214 0.580 0.420
#> GSM710835     1  0.0000      0.973 1.000 0.000
#> GSM710836     1  0.0000      0.973 1.000 0.000
#> GSM710837     1  0.0000      0.973 1.000 0.000
#> GSM710862     1  0.0000      0.973 1.000 0.000
#> GSM710863     1  0.0000      0.973 1.000 0.000
#> GSM710865     1  0.0000      0.973 1.000 0.000
#> GSM710867     1  0.0000      0.973 1.000 0.000
#> GSM710869     1  0.0000      0.973 1.000 0.000
#> GSM710871     1  0.0000      0.973 1.000 0.000
#> GSM710873     1  0.0000      0.973 1.000 0.000
#> GSM710802     1  0.0000      0.973 1.000 0.000
#> GSM710803     1  0.0000      0.973 1.000 0.000
#> GSM710804     2  0.8909      0.572 0.308 0.692
#> GSM710805     2  0.5519      0.841 0.128 0.872
#> GSM710806     1  0.0000      0.973 1.000 0.000
#> GSM710807     1  0.0000      0.973 1.000 0.000
#> GSM710808     1  0.0000      0.973 1.000 0.000
#> GSM710809     1  0.0000      0.973 1.000 0.000
#> GSM710810     1  0.0000      0.973 1.000 0.000
#> GSM710811     1  0.0000      0.973 1.000 0.000
#> GSM710812     1  0.0000      0.973 1.000 0.000
#> GSM710821     1  0.0000      0.973 1.000 0.000
#> GSM710822     1  0.0000      0.973 1.000 0.000
#> GSM710823     1  0.2423      0.933 0.960 0.040
#> GSM710824     2  0.9833      0.291 0.424 0.576
#> GSM710825     1  0.0376      0.969 0.996 0.004
#> GSM710826     1  0.0000      0.973 1.000 0.000
#> GSM710827     1  0.0000      0.973 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-NMF-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-NMF-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for certain classes. The following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is drawn and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-NMF-dimension-reduction-1

The following heatmap shows how the subgroups are split as k increases:

collect_classes(res)

plot of chunk CV-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n specimen(p) individual(p) k
#> CV:NMF 52    7.88e-09         0.572 2
#> CV:NMF 45    5.10e-08         0.497 3
#> CV:NMF 45    1.96e-07         0.601 4
#> CV:NMF 43    2.99e-06         0.255 5
#> CV:NMF 22    3.58e-02         0.209 6
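The choice between the two tests can be illustrated in base R with made-up data (the annotation names and values here are hypothetical, not from this dataset):

```r
set.seed(1)
class = factor(rep(c(1, 2), each = 10))                  # hypothetical subgroup labels
age   = c(rnorm(10, mean = 50), rnorm(10, mean = 60))    # numeric annotation
sex   = factor(sample(c("F", "M"), 20, replace = TRUE))  # discrete annotation

oneway.test(age ~ class)$p.value       # one-way ANOVA for a numeric annotation
chisq.test(table(class, sex))$p.value  # chi-squared test for a discrete annotation
```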

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment analysis on the signature genes. See this vignette for more detailed explanations.


MAD:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "hclust"]
# you can also extract it by
# res = res_list["MAD:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 4.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-hclust-collect-plots

All the plots in the panels can also be generated by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.201           0.600       0.785         0.3782 0.770   0.770
#> 3 3 0.325           0.705       0.779         0.5856 0.665   0.564
#> 4 4 0.619           0.703       0.846         0.2183 0.827   0.603
#> 5 5 0.620           0.523       0.786         0.0632 0.990   0.963
#> 6 6 0.669           0.604       0.710         0.0532 0.938   0.757

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 4

Following shows the table of the partitions (You need to click the show/hide code output link to see it). The membership matrix (columns with name p*) is inferred by clue::cl_consensus() function with the SE method. Basically the value in the membership matrix represents the probability to belong to a certain group. The finall class label for an item is determined with the group with highest probability it belongs to.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
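As a sketch of the idea (not cola's exact code), the entropy of one sample is the Shannon entropy of its membership probabilities, normalized by log2 of the number of groups so that the maximum is 1:

```r
# Sketch: normalized Shannon entropy of a membership-probability vector
# (illustrative; get_classes() computes this internally).
membership_entropy = function(p) {
    k = length(p)
    p = p[p > 0]                      # 0 * log2(0) is taken as 0
    -sum(p * log2(p)) / log2(k)       # normalize so the maximum is 1
}

membership_entropy(c(0.240, 0.760))   # about 0.795, matching the first row below
membership_entropy(c(0.5, 0.5))       # maximal uncertainty -> 1
```

A sample split evenly across groups thus gets entropy 1, while a confident assignment gets entropy near 0.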

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.7950     0.6145 0.240 0.760
#> GSM710829     2  0.8861     0.5143 0.304 0.696
#> GSM710839     2  0.7883     0.6252 0.236 0.764
#> GSM710841     2  0.8861     0.5143 0.304 0.696
#> GSM710843     2  0.5629     0.6862 0.132 0.868
#> GSM710845     2  0.7950     0.6111 0.240 0.760
#> GSM710846     2  0.9754     0.4569 0.408 0.592
#> GSM710849     2  0.8861     0.5143 0.304 0.696
#> GSM710853     2  0.8955     0.5079 0.312 0.688
#> GSM710855     1  0.7883     0.6661 0.764 0.236
#> GSM710858     2  0.9087     0.5096 0.324 0.676
#> GSM710860     2  0.7883     0.6252 0.236 0.764
#> GSM710801     2  0.8861     0.5182 0.304 0.696
#> GSM710813     2  0.8909     0.5133 0.308 0.692
#> GSM710814     2  0.7883     0.6249 0.236 0.764
#> GSM710815     2  0.5294     0.6904 0.120 0.880
#> GSM710816     2  0.7950     0.6111 0.240 0.760
#> GSM710817     2  0.8909     0.5084 0.308 0.692
#> GSM710818     2  0.7883     0.6148 0.236 0.764
#> GSM710819     1  0.0672     0.7583 0.992 0.008
#> GSM710820     2  0.8861     0.5143 0.304 0.696
#> GSM710830     2  0.0376     0.6985 0.004 0.996
#> GSM710831     2  0.8955     0.5086 0.312 0.688
#> GSM710832     2  0.0376     0.6985 0.004 0.996
#> GSM710833     1  0.4022     0.8136 0.920 0.080
#> GSM710834     2  0.7299     0.6399 0.204 0.796
#> GSM710835     2  0.8861     0.5143 0.304 0.696
#> GSM710836     1  0.3879     0.8112 0.924 0.076
#> GSM710837     2  0.9000     0.3769 0.316 0.684
#> GSM710862     2  0.6623     0.6599 0.172 0.828
#> GSM710863     2  0.2043     0.7031 0.032 0.968
#> GSM710865     2  0.3733     0.7003 0.072 0.928
#> GSM710867     2  0.5737     0.6395 0.136 0.864
#> GSM710869     1  0.9881     0.0168 0.564 0.436
#> GSM710871     2  0.5737     0.6395 0.136 0.864
#> GSM710873     1  0.4562     0.8134 0.904 0.096
#> GSM710802     2  0.5294     0.6904 0.120 0.880
#> GSM710803     2  0.0376     0.6985 0.004 0.996
#> GSM710804     2  0.8861     0.5143 0.304 0.696
#> GSM710805     2  0.9833     0.4797 0.424 0.576
#> GSM710806     2  0.8267     0.5539 0.260 0.740
#> GSM710807     2  0.9000     0.3769 0.316 0.684
#> GSM710808     2  0.0938     0.7005 0.012 0.988
#> GSM710809     1  0.6623     0.7644 0.828 0.172
#> GSM710810     2  0.6887     0.6509 0.184 0.816
#> GSM710811     2  0.2948     0.6882 0.052 0.948
#> GSM710812     2  0.2948     0.6882 0.052 0.948
#> GSM710821     2  0.3431     0.6987 0.064 0.936
#> GSM710822     2  0.9833     0.3660 0.424 0.576
#> GSM710823     2  0.9833     0.3660 0.424 0.576
#> GSM710824     2  0.7883     0.6149 0.236 0.764
#> GSM710825     2  0.3431     0.6987 0.064 0.936
#> GSM710826     2  0.0376     0.6985 0.004 0.996
#> GSM710827     2  0.1843     0.7023 0.028 0.972

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-hclust-consensus-heatmap-1
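Conceptually, each consensus value is the fraction of resampled partitions in which two samples are assigned to the same group. A toy sketch (not cola's implementation; the real matrix is returned by get_consensus()):

```r
# Toy sketch: build a consensus matrix from repeated partitions.
partitions = list(c(1, 1, 2), c(1, 2, 2), c(1, 1, 2))  # 3 runs, 3 samples
consensus = Reduce(`+`, lapply(partitions, function(cl) outer(cl, cl, `==`) + 0)) /
    length(partitions)
consensus  # samples 1 and 2 co-cluster in 2 of 3 runs -> 2/3
```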

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-hclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can be candidate markers for certain classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
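For instance, a typical post-processing step (a sketch using the column names listed above; tb comes from get_signatures() and mat from get_matrix()):

```r
# Demonstration only: keep signatures at a stricter FDR cutoff and
# map them back to rows of the input matrix via which_row.
sig = tb[tb$fdr < 0.01, ]
sig_mat = mat[sig$which_row, , drop = FALSE]
```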

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-hclust-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.
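The two underlying tests can be sketched with base R (illustrative only, not cola's exact implementation; the specimen annotation comes from the report's anno table, while the numeric example is hypothetical):

```r
# Sketch of the two tests applied by test_to_known_factors().
cl = get_classes(res, k = 2)$class
anno = get_anno(res)

# Discrete annotation -> chi-squared contingency table test:
chisq.test(table(cl, anno$specimen))

# Numeric annotation (hypothetical column 'age') -> one-way ANOVA:
# oneway.test(anno$age ~ factor(cl))
```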

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> MAD:hclust 47     0.96249         0.703 2
#> MAD:hclust 49     0.08180         0.652 3
#> MAD:hclust 45     0.00064         0.911 4
#> MAD:hclust 34     0.00220         0.644 5
#> MAD:hclust 42     0.00143         0.474 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "kmeans"]
# you can also extract it by
# res = res_list["MAD:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) onto one single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk MAD-kmeans-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If the two partitions are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.492           0.774       0.890         0.5043 0.497   0.497
#> 3 3 0.474           0.301       0.604         0.3198 0.709   0.477
#> 4 4 0.584           0.722       0.818         0.1311 0.769   0.420
#> 5 5 0.628           0.534       0.735         0.0650 0.946   0.786
#> 6 6 0.654           0.473       0.658         0.0405 0.916   0.630

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.4298      0.865 0.088 0.912
#> GSM710829     2  0.0000      0.875 0.000 1.000
#> GSM710839     2  0.4298      0.865 0.088 0.912
#> GSM710841     2  0.2423      0.869 0.040 0.960
#> GSM710843     2  0.3431      0.872 0.064 0.936
#> GSM710845     2  0.6801      0.793 0.180 0.820
#> GSM710846     2  0.0000      0.875 0.000 1.000
#> GSM710849     2  0.4161      0.844 0.084 0.916
#> GSM710853     2  0.0000      0.875 0.000 1.000
#> GSM710855     1  0.9993      0.241 0.516 0.484
#> GSM710858     2  0.0376      0.873 0.004 0.996
#> GSM710860     2  0.4298      0.865 0.088 0.912
#> GSM710801     2  0.0000      0.875 0.000 1.000
#> GSM710813     2  0.0000      0.875 0.000 1.000
#> GSM710814     2  0.4298      0.865 0.088 0.912
#> GSM710815     2  0.3431      0.872 0.064 0.936
#> GSM710816     2  0.4298      0.865 0.088 0.912
#> GSM710817     2  0.7056      0.740 0.192 0.808
#> GSM710818     1  0.9850      0.263 0.572 0.428
#> GSM710819     1  0.8955      0.630 0.688 0.312
#> GSM710820     2  0.0000      0.875 0.000 1.000
#> GSM710830     1  0.0376      0.861 0.996 0.004
#> GSM710831     2  0.5178      0.813 0.116 0.884
#> GSM710832     1  0.0376      0.861 0.996 0.004
#> GSM710833     1  0.8955      0.630 0.688 0.312
#> GSM710834     2  0.8861      0.641 0.304 0.696
#> GSM710835     2  0.9754      0.279 0.408 0.592
#> GSM710836     1  0.4562      0.826 0.904 0.096
#> GSM710837     1  0.3879      0.833 0.924 0.076
#> GSM710862     1  0.3584      0.833 0.932 0.068
#> GSM710863     1  0.0376      0.861 0.996 0.004
#> GSM710865     1  0.0376      0.861 0.996 0.004
#> GSM710867     1  0.1843      0.854 0.972 0.028
#> GSM710869     1  0.2603      0.855 0.956 0.044
#> GSM710871     1  0.0376      0.861 0.996 0.004
#> GSM710873     1  0.4298      0.826 0.912 0.088
#> GSM710802     1  0.0376      0.861 0.996 0.004
#> GSM710803     1  0.0376      0.861 0.996 0.004
#> GSM710804     2  0.8713      0.579 0.292 0.708
#> GSM710805     2  0.2043      0.872 0.032 0.968
#> GSM710806     1  0.9970      0.130 0.532 0.468
#> GSM710807     1  0.3879      0.833 0.924 0.076
#> GSM710808     1  0.5842      0.757 0.860 0.140
#> GSM710809     1  0.5629      0.803 0.868 0.132
#> GSM710810     1  0.2043      0.852 0.968 0.032
#> GSM710811     1  0.0376      0.861 0.996 0.004
#> GSM710812     1  0.0376      0.861 0.996 0.004
#> GSM710821     1  0.7745      0.629 0.772 0.228
#> GSM710822     1  0.0376      0.860 0.996 0.004
#> GSM710823     1  0.7139      0.709 0.804 0.196
#> GSM710824     2  0.7139      0.775 0.196 0.804
#> GSM710825     1  0.8555      0.554 0.720 0.280
#> GSM710826     1  0.0376      0.861 0.996 0.004
#> GSM710827     1  0.0376      0.861 0.996 0.004

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-kmeans-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-kmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can be candidate markers for certain classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-kmeans-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> MAD:kmeans 50    3.04e-07         0.570 2
#> MAD:kmeans 23    1.07e-03         0.636 3
#> MAD:kmeans 50    2.08e-05         0.735 4
#> MAD:kmeans 40    1.31e-04         0.941 5
#> MAD:kmeans 34    1.95e-04         0.977 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "skmeans"]
# you can also extract it by
# res = res_list["MAD:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) onto one single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk MAD-skmeans-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If the two partitions are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.684           0.847       0.935         0.5088 0.493   0.493
#> 3 3 0.596           0.551       0.743         0.3225 0.706   0.476
#> 4 4 0.638           0.780       0.831         0.1272 0.792   0.472
#> 5 5 0.604           0.476       0.650         0.0623 0.885   0.588
#> 6 6 0.634           0.472       0.665         0.0373 0.898   0.593

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000      0.940 0.000 1.000
#> GSM710829     2  0.0000      0.940 0.000 1.000
#> GSM710839     2  0.0000      0.940 0.000 1.000
#> GSM710841     2  0.0000      0.940 0.000 1.000
#> GSM710843     2  0.0000      0.940 0.000 1.000
#> GSM710845     2  0.1633      0.922 0.024 0.976
#> GSM710846     2  0.0000      0.940 0.000 1.000
#> GSM710849     2  0.0376      0.938 0.004 0.996
#> GSM710853     2  0.0000      0.940 0.000 1.000
#> GSM710855     1  0.9944      0.255 0.544 0.456
#> GSM710858     2  0.0000      0.940 0.000 1.000
#> GSM710860     2  0.0000      0.940 0.000 1.000
#> GSM710801     2  0.0000      0.940 0.000 1.000
#> GSM710813     2  0.0000      0.940 0.000 1.000
#> GSM710814     2  0.0000      0.940 0.000 1.000
#> GSM710815     2  0.0000      0.940 0.000 1.000
#> GSM710816     2  0.0000      0.940 0.000 1.000
#> GSM710817     2  0.0376      0.938 0.004 0.996
#> GSM710818     1  0.9815      0.351 0.580 0.420
#> GSM710819     1  0.7602      0.719 0.780 0.220
#> GSM710820     2  0.0000      0.940 0.000 1.000
#> GSM710830     1  0.0000      0.911 1.000 0.000
#> GSM710831     2  0.0000      0.940 0.000 1.000
#> GSM710832     1  0.0000      0.911 1.000 0.000
#> GSM710833     1  0.7950      0.694 0.760 0.240
#> GSM710834     2  0.7528      0.707 0.216 0.784
#> GSM710835     2  0.9552      0.400 0.376 0.624
#> GSM710836     1  0.0000      0.911 1.000 0.000
#> GSM710837     1  0.0000      0.911 1.000 0.000
#> GSM710862     1  0.0376      0.909 0.996 0.004
#> GSM710863     1  0.0000      0.911 1.000 0.000
#> GSM710865     1  0.0000      0.911 1.000 0.000
#> GSM710867     1  0.0000      0.911 1.000 0.000
#> GSM710869     1  0.0000      0.911 1.000 0.000
#> GSM710871     1  0.0000      0.911 1.000 0.000
#> GSM710873     1  0.0000      0.911 1.000 0.000
#> GSM710802     1  0.0000      0.911 1.000 0.000
#> GSM710803     1  0.0000      0.911 1.000 0.000
#> GSM710804     2  0.7219      0.727 0.200 0.800
#> GSM710805     2  0.0000      0.940 0.000 1.000
#> GSM710806     2  0.9815      0.291 0.420 0.580
#> GSM710807     1  0.0000      0.911 1.000 0.000
#> GSM710808     1  0.2043      0.890 0.968 0.032
#> GSM710809     1  0.8327      0.616 0.736 0.264
#> GSM710810     1  0.0376      0.909 0.996 0.004
#> GSM710811     1  0.0000      0.911 1.000 0.000
#> GSM710812     1  0.0000      0.911 1.000 0.000
#> GSM710821     1  0.7219      0.718 0.800 0.200
#> GSM710822     1  0.0000      0.911 1.000 0.000
#> GSM710823     1  0.7219      0.739 0.800 0.200
#> GSM710824     2  0.1414      0.926 0.020 0.980
#> GSM710825     1  0.7219      0.718 0.800 0.200
#> GSM710826     1  0.0000      0.911 1.000 0.000
#> GSM710827     1  0.0000      0.911 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-skmeans-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-skmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can be candidate markers for certain classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-skmeans-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>              n specimen(p) individual(p) k
#> MAD:skmeans 50    3.04e-07         0.570 2
#> MAD:skmeans 45    2.70e-05         0.995 3
#> MAD:skmeans 50    1.71e-05         0.876 4
#> MAD:skmeans 26    2.26e-01         0.884 5
#> MAD:skmeans 35    5.54e-04         0.799 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "pam"]
# you can also extract it by
# res = res_list["MAD:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) onto one single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk MAD-pam-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If the two partitions are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.689           0.872       0.942         0.5057 0.493   0.493
#> 3 3 0.552           0.536       0.742         0.3074 0.736   0.513
#> 4 4 0.560           0.581       0.791         0.1292 0.824   0.527
#> 5 5 0.704           0.392       0.721         0.0679 0.790   0.386
#> 6 6 0.728           0.586       0.797         0.0435 0.886   0.562

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.5178    0.84314 0.116 0.884
#> GSM710829     2  0.0000    0.93562 0.000 1.000
#> GSM710839     2  0.0000    0.93562 0.000 1.000
#> GSM710841     2  0.0938    0.93126 0.012 0.988
#> GSM710843     2  0.0000    0.93562 0.000 1.000
#> GSM710845     1  0.8661    0.57966 0.712 0.288
#> GSM710846     2  0.0000    0.93562 0.000 1.000
#> GSM710849     2  0.0376    0.93434 0.004 0.996
#> GSM710853     2  0.0000    0.93562 0.000 1.000
#> GSM710855     1  0.9608    0.45421 0.616 0.384
#> GSM710858     2  0.0000    0.93562 0.000 1.000
#> GSM710860     2  0.0000    0.93562 0.000 1.000
#> GSM710801     2  0.0000    0.93562 0.000 1.000
#> GSM710813     2  0.0000    0.93562 0.000 1.000
#> GSM710814     2  0.0000    0.93562 0.000 1.000
#> GSM710815     2  0.0000    0.93562 0.000 1.000
#> GSM710816     2  0.2236    0.91623 0.036 0.964
#> GSM710817     2  0.0000    0.93562 0.000 1.000
#> GSM710818     1  0.8144    0.69341 0.748 0.252
#> GSM710819     1  0.4431    0.86832 0.908 0.092
#> GSM710820     2  0.0000    0.93562 0.000 1.000
#> GSM710830     1  0.0000    0.93427 1.000 0.000
#> GSM710831     2  0.0000    0.93562 0.000 1.000
#> GSM710832     1  0.0000    0.93427 1.000 0.000
#> GSM710833     1  0.9635    0.43078 0.612 0.388
#> GSM710834     2  0.6048    0.83944 0.148 0.852
#> GSM710835     2  0.5737    0.85251 0.136 0.864
#> GSM710836     1  0.2603    0.90494 0.956 0.044
#> GSM710837     1  0.0000    0.93427 1.000 0.000
#> GSM710862     1  0.0376    0.93243 0.996 0.004
#> GSM710863     1  0.0000    0.93427 1.000 0.000
#> GSM710865     1  0.0000    0.93427 1.000 0.000
#> GSM710867     1  0.0000    0.93427 1.000 0.000
#> GSM710869     1  0.0000    0.93427 1.000 0.000
#> GSM710871     1  0.0000    0.93427 1.000 0.000
#> GSM710873     1  0.3431    0.88885 0.936 0.064
#> GSM710802     1  0.0000    0.93427 1.000 0.000
#> GSM710803     1  0.0000    0.93427 1.000 0.000
#> GSM710804     2  0.5629    0.85558 0.132 0.868
#> GSM710805     2  0.3733    0.89792 0.072 0.928
#> GSM710806     2  0.5737    0.85251 0.136 0.864
#> GSM710807     1  0.0000    0.93427 1.000 0.000
#> GSM710808     1  0.0000    0.93427 1.000 0.000
#> GSM710809     2  0.5629    0.85621 0.132 0.868
#> GSM710810     1  0.0000    0.93427 1.000 0.000
#> GSM710811     1  0.0000    0.93427 1.000 0.000
#> GSM710812     1  0.0000    0.93427 1.000 0.000
#> GSM710821     1  0.0376    0.93258 0.996 0.004
#> GSM710822     1  0.2778    0.90404 0.952 0.048
#> GSM710823     1  0.6438    0.80693 0.836 0.164
#> GSM710824     2  0.9963    0.00859 0.464 0.536
#> GSM710825     1  0.0376    0.93258 0.996 0.004
#> GSM710826     1  0.0000    0.93427 1.000 0.000
#> GSM710827     1  0.0000    0.93427 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-pam-consensus-heatmap-1
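As a minimal sketch of what a consensus value means (hypothetical labels, not cola's internal code): it is the fraction of resampled partitions in which two samples fall into the same group.

```r
# Hypothetical class labels from 4 repeated partitions of 3 samples;
# the consensus value for a sample pair is the fraction of partitions
# in which the two samples receive the same label.
partitions = rbind(c(1, 1, 2),
                   c(1, 1, 2),
                   c(1, 2, 2),
                   c(1, 1, 2))
co = function(i, j) mean(partitions[, i] == partitions[, j])
co(1, 2)  # samples 1 and 2 co-cluster in 3 of 4 partitions
#> [1] 0.75
```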

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-pam-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures: rows that are significantly different between classes and that can serve as candidate markers for those classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
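A minimal sketch of how such a table can be derived (hypothetical data; cola's actual differential test may differ): a row-wise t-test between the two classes, with Benjamini-Hochberg adjusted p-values giving the fdr column.

```r
set.seed(1)
m = matrix(rnorm(50 * 8), nrow = 50)   # hypothetical 50 x 8 expression matrix
class = rep(1:2, each = 4)             # hypothetical class labels for 8 samples

# row-wise t-test between the two classes
p = apply(m, 1, function(x) t.test(x[class == 1], x[class == 2])$p.value)

# BH adjustment gives the FDR, as in the fdr column above
fdr = p.adjust(p, method = "BH")
which(fdr < 0.05)                      # rows that would qualify as signatures
```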

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-pam-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n specimen(p) individual(p) k
#> MAD:pam 51    4.64e-05         0.680 2
#> MAD:pam 33    9.36e-06         0.305 3
#> MAD:pam 40    7.78e-05         0.419 4
#> MAD:pam 29    5.05e-04         0.543 5
#> MAD:pam 42    3.64e-05         0.789 6
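A minimal sketch of the two tests on made-up annotations (not cola's exact implementation):

```r
set.seed(1)
class = rep(1:2, each = 10)                     # hypothetical subgroup labels

# discrete annotation: chi-squared test on the contingency table
anno_f = rep(c("a", "b", "a", "b"), c(8, 2, 2, 8))
chisq.test(table(class, anno_f))$p.value

# numeric annotation: one-way ANOVA across the subgroups
anno_n = c(rnorm(10, mean = 0), rnorm(10, mean = 2))
oneway.test(anno_n ~ factor(class))$p.value
```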

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "mclust"]
# you can also extract it by
# res = res_list["MAD:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) onto a single page, allowing a quick comparison between different values of k.

collect_plots(res)

plot of chunk MAD-mclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots of the statistics used for choosing an “optimized” k (the same statistics returned by get_stats()).

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if the two are too similar, we do not accept that k is better than k-1.
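As an illustration of the PAC (proportion of ambiguous clustering) score: it is the fraction of consensus values that are neither clearly 0 nor clearly 1, so a higher 1-PAC indicates a more stable partition. The ambiguity interval (0.1, 0.9) used here is an assumption for illustration; cola's default may differ.

```r
# hypothetical consensus values for a set of sample pairs
consensus = c(0, 0.05, 0.2, 0.5, 0.8, 0.95, 1)
pac = mean(consensus > 0.1 & consensus < 0.9)  # 3 of 7 values are ambiguous
1 - pac                                        # higher 1-PAC = more stable
#> [1] 0.5714286
```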

select_partition_number(res)

plot of chunk MAD-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.413           0.850       0.886         0.4579 0.497   0.497
#> 3 3 0.309           0.667       0.725         0.2854 0.797   0.642
#> 4 4 0.616           0.756       0.858         0.2342 0.743   0.455
#> 5 5 0.534           0.536       0.682         0.0569 0.897   0.637
#> 6 6 0.729           0.610       0.781         0.0775 0.859   0.460

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score from the consensus matrix.
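A minimal sketch of the two quantities for a single sample, taking one probability row from the table below; normalizing the entropy by log2(k) is an assumption, but it is consistent with the reported values for k = 2.

```r
p = c(0.132, 0.868)   # one membership-matrix row (e.g. GSM710845 below)
which.max(p)          # final class label: the group with highest probability
#> [1] 2
entropy = -sum(p * log2(p)) / log2(length(p))
round(entropy, 4)     # matches the entropy column for this sample
#> [1] 0.5629
```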

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0938     0.9515 0.012 0.988
#> GSM710829     2  0.0938     0.9514 0.012 0.988
#> GSM710839     2  0.0000     0.9482 0.000 1.000
#> GSM710841     2  0.2423     0.9418 0.040 0.960
#> GSM710843     2  0.0376     0.9511 0.004 0.996
#> GSM710845     2  0.5629     0.8427 0.132 0.868
#> GSM710846     2  0.0376     0.9511 0.004 0.996
#> GSM710849     2  0.2423     0.9418 0.040 0.960
#> GSM710853     2  0.0376     0.9511 0.004 0.996
#> GSM710855     1  0.9661     0.6064 0.608 0.392
#> GSM710858     2  0.0672     0.9516 0.008 0.992
#> GSM710860     2  0.0000     0.9482 0.000 1.000
#> GSM710801     2  0.0376     0.9511 0.004 0.996
#> GSM710813     2  0.0376     0.9511 0.004 0.996
#> GSM710814     2  0.0000     0.9482 0.000 1.000
#> GSM710815     2  0.0376     0.9511 0.004 0.996
#> GSM710816     2  0.0938     0.9515 0.012 0.988
#> GSM710817     2  0.2603     0.9393 0.044 0.956
#> GSM710818     1  0.9358     0.6808 0.648 0.352
#> GSM710819     1  0.6973     0.8730 0.812 0.188
#> GSM710820     2  0.0376     0.9511 0.004 0.996
#> GSM710830     1  0.0376     0.7956 0.996 0.004
#> GSM710831     2  0.2778     0.9364 0.048 0.952
#> GSM710832     1  0.0000     0.7952 1.000 0.000
#> GSM710833     1  0.7299     0.8633 0.796 0.204
#> GSM710834     2  0.4939     0.8747 0.108 0.892
#> GSM710835     2  0.7376     0.7077 0.208 0.792
#> GSM710836     1  0.6973     0.8730 0.812 0.188
#> GSM710837     1  0.6973     0.8730 0.812 0.188
#> GSM710862     1  0.6973     0.8730 0.812 0.188
#> GSM710863     1  0.6438     0.8706 0.836 0.164
#> GSM710865     1  0.6438     0.8706 0.836 0.164
#> GSM710867     1  0.1414     0.8068 0.980 0.020
#> GSM710869     1  0.6973     0.8730 0.812 0.188
#> GSM710871     1  0.0000     0.7952 1.000 0.000
#> GSM710873     1  0.6973     0.8730 0.812 0.188
#> GSM710802     1  0.6973     0.8730 0.812 0.188
#> GSM710803     1  0.0000     0.7952 1.000 0.000
#> GSM710804     2  0.2603     0.9393 0.044 0.956
#> GSM710805     2  0.2236     0.9441 0.036 0.964
#> GSM710806     1  0.9954    -0.0507 0.540 0.460
#> GSM710807     1  0.6973     0.8730 0.812 0.188
#> GSM710808     1  0.3114     0.8027 0.944 0.056
#> GSM710809     1  0.8763     0.7646 0.704 0.296
#> GSM710810     1  0.7056     0.8709 0.808 0.192
#> GSM710811     1  0.0000     0.7952 1.000 0.000
#> GSM710812     1  0.6438     0.8706 0.836 0.164
#> GSM710821     1  0.8144     0.8195 0.748 0.252
#> GSM710822     1  0.6973     0.8730 0.812 0.188
#> GSM710823     1  0.6973     0.8730 0.812 0.188
#> GSM710824     2  0.6973     0.7490 0.188 0.812
#> GSM710825     1  0.9522     0.6321 0.628 0.372
#> GSM710826     1  0.0376     0.7956 0.996 0.004
#> GSM710827     1  0.6438     0.8706 0.836 0.164

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-mclust-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-mclust-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures: rows that are significantly different between classes and that can serve as candidate markers for those classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-mclust-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> MAD:mclust 53    6.54e-06         0.482 2
#> MAD:mclust 50    2.14e-06         0.807 3
#> MAD:mclust 49    1.85e-05         0.879 4
#> MAD:mclust 40    1.04e-06         0.770 5
#> MAD:mclust 35    5.80e-05         0.752 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "NMF"]
# you can also extract it by
# res = res_list["MAD:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) onto a single page, allowing a quick comparison between different values of k.

collect_plots(res)

plot of chunk MAD-NMF-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots of the statistics used for choosing an “optimized” k (the same statistics returned by get_stats()).

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.627           0.838       0.931         0.5066 0.493   0.493
#> 3 3 0.616           0.729       0.860         0.3045 0.746   0.528
#> 4 4 0.522           0.517       0.739         0.1206 0.843   0.580
#> 5 5 0.525           0.355       0.650         0.0710 0.828   0.462
#> 6 6 0.611           0.459       0.686         0.0534 0.853   0.421

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000      0.918 0.000 1.000
#> GSM710829     2  0.0000      0.918 0.000 1.000
#> GSM710839     2  0.0000      0.918 0.000 1.000
#> GSM710841     2  0.0000      0.918 0.000 1.000
#> GSM710843     2  0.0000      0.918 0.000 1.000
#> GSM710845     2  0.0000      0.918 0.000 1.000
#> GSM710846     2  0.0000      0.918 0.000 1.000
#> GSM710849     2  0.4431      0.849 0.092 0.908
#> GSM710853     2  0.0000      0.918 0.000 1.000
#> GSM710855     1  0.9661      0.385 0.608 0.392
#> GSM710858     2  0.0000      0.918 0.000 1.000
#> GSM710860     2  0.0000      0.918 0.000 1.000
#> GSM710801     2  0.0000      0.918 0.000 1.000
#> GSM710813     2  0.0000      0.918 0.000 1.000
#> GSM710814     2  0.0000      0.918 0.000 1.000
#> GSM710815     2  0.0000      0.918 0.000 1.000
#> GSM710816     2  0.0000      0.918 0.000 1.000
#> GSM710817     2  0.7745      0.711 0.228 0.772
#> GSM710818     2  0.9460      0.376 0.364 0.636
#> GSM710819     1  0.7219      0.744 0.800 0.200
#> GSM710820     2  0.0000      0.918 0.000 1.000
#> GSM710830     1  0.0000      0.919 1.000 0.000
#> GSM710831     2  0.6343      0.768 0.160 0.840
#> GSM710832     1  0.0000      0.919 1.000 0.000
#> GSM710833     1  0.7602      0.719 0.780 0.220
#> GSM710834     2  0.7219      0.738 0.200 0.800
#> GSM710835     1  0.2948      0.884 0.948 0.052
#> GSM710836     1  0.0376      0.917 0.996 0.004
#> GSM710837     1  0.0000      0.919 1.000 0.000
#> GSM710862     1  0.6148      0.797 0.848 0.152
#> GSM710863     1  0.0000      0.919 1.000 0.000
#> GSM710865     1  0.0000      0.919 1.000 0.000
#> GSM710867     1  0.0000      0.919 1.000 0.000
#> GSM710869     1  0.0376      0.917 0.996 0.004
#> GSM710871     1  0.0000      0.919 1.000 0.000
#> GSM710873     1  0.0000      0.919 1.000 0.000
#> GSM710802     1  0.0000      0.919 1.000 0.000
#> GSM710803     1  0.0000      0.919 1.000 0.000
#> GSM710804     2  0.8608      0.627 0.284 0.716
#> GSM710805     2  0.0000      0.918 0.000 1.000
#> GSM710806     1  0.7674      0.677 0.776 0.224
#> GSM710807     1  0.0000      0.919 1.000 0.000
#> GSM710808     1  0.4690      0.840 0.900 0.100
#> GSM710809     1  0.0672      0.915 0.992 0.008
#> GSM710810     1  0.0938      0.912 0.988 0.012
#> GSM710811     1  0.0000      0.919 1.000 0.000
#> GSM710812     1  0.0000      0.919 1.000 0.000
#> GSM710821     1  0.9795      0.222 0.584 0.416
#> GSM710822     1  0.0000      0.919 1.000 0.000
#> GSM710823     1  0.7219      0.744 0.800 0.200
#> GSM710824     2  0.0000      0.918 0.000 1.000
#> GSM710825     2  0.9833      0.301 0.424 0.576
#> GSM710826     1  0.0000      0.919 1.000 0.000
#> GSM710827     1  0.0000      0.919 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-NMF-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-NMF-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures: rows that are significantly different between classes and that can serve as candidate markers for those classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-NMF-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n specimen(p) individual(p) k
#> MAD:NMF 50    3.04e-07         0.759 2
#> MAD:NMF 45    1.75e-06         0.888 3
#> MAD:NMF 35    6.03e-05         0.847 4
#> MAD:NMF 18    6.11e-04         0.710 5
#> MAD:NMF 28    1.98e-03         0.686 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


ATC:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "hclust"]
# you can also extract it by
# res = res_list["ATC:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots made from res for every k (number of partitions) onto a single page, allowing a quick comparison between different values of k.

collect_plots(res)

plot of chunk ATC-hclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots of the statistics used for choosing an “optimized” k (the same statistics returned by get_stats()).

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.987           0.942       0.973         0.4933 0.508   0.508
#> 3 3 0.628           0.629       0.682         0.2875 0.858   0.729
#> 4 4 0.683           0.837       0.830         0.1516 0.681   0.353
#> 5 5 0.741           0.728       0.834         0.0870 0.916   0.689
#> 6 6 0.742           0.719       0.818         0.0275 0.918   0.632

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2   0.000      0.964 0.000 1.000
#> GSM710829     2   0.000      0.964 0.000 1.000
#> GSM710839     2   0.000      0.964 0.000 1.000
#> GSM710841     2   0.000      0.964 0.000 1.000
#> GSM710843     2   0.000      0.964 0.000 1.000
#> GSM710845     2   0.000      0.964 0.000 1.000
#> GSM710846     2   0.000      0.964 0.000 1.000
#> GSM710849     2   0.000      0.964 0.000 1.000
#> GSM710853     2   0.000      0.964 0.000 1.000
#> GSM710855     1   0.000      0.983 1.000 0.000
#> GSM710858     2   0.000      0.964 0.000 1.000
#> GSM710860     2   0.000      0.964 0.000 1.000
#> GSM710801     2   0.000      0.964 0.000 1.000
#> GSM710813     2   0.000      0.964 0.000 1.000
#> GSM710814     2   0.000      0.964 0.000 1.000
#> GSM710815     2   0.000      0.964 0.000 1.000
#> GSM710816     2   0.000      0.964 0.000 1.000
#> GSM710817     2   0.000      0.964 0.000 1.000
#> GSM710818     1   0.204      0.969 0.968 0.032
#> GSM710819     1   0.000      0.983 1.000 0.000
#> GSM710820     2   0.000      0.964 0.000 1.000
#> GSM710830     2   0.358      0.918 0.068 0.932
#> GSM710831     2   0.000      0.964 0.000 1.000
#> GSM710832     1   0.000      0.983 1.000 0.000
#> GSM710833     1   0.118      0.977 0.984 0.016
#> GSM710834     2   0.000      0.964 0.000 1.000
#> GSM710835     2   0.000      0.964 0.000 1.000
#> GSM710836     1   0.000      0.983 1.000 0.000
#> GSM710837     1   0.000      0.983 1.000 0.000
#> GSM710862     1   0.343      0.943 0.936 0.064
#> GSM710863     1   0.000      0.983 1.000 0.000
#> GSM710865     1   0.000      0.983 1.000 0.000
#> GSM710867     1   0.000      0.983 1.000 0.000
#> GSM710869     1   0.000      0.983 1.000 0.000
#> GSM710871     1   0.000      0.983 1.000 0.000
#> GSM710873     1   0.000      0.983 1.000 0.000
#> GSM710802     1   0.358      0.940 0.932 0.068
#> GSM710803     1   0.000      0.983 1.000 0.000
#> GSM710804     2   0.000      0.964 0.000 1.000
#> GSM710805     2   0.000      0.964 0.000 1.000
#> GSM710806     2   0.000      0.964 0.000 1.000
#> GSM710807     1   0.000      0.983 1.000 0.000
#> GSM710808     2   0.260      0.937 0.044 0.956
#> GSM710809     2   0.876      0.596 0.296 0.704
#> GSM710810     2   0.996      0.156 0.464 0.536
#> GSM710811     1   0.118      0.977 0.984 0.016
#> GSM710812     1   0.204      0.969 0.968 0.032
#> GSM710821     2   0.260      0.937 0.044 0.956
#> GSM710822     1   0.278      0.957 0.952 0.048
#> GSM710823     1   0.358      0.940 0.932 0.068
#> GSM710824     2   0.343      0.921 0.064 0.936
#> GSM710825     2   0.260      0.937 0.044 0.956
#> GSM710826     2   0.343      0.921 0.064 0.936
#> GSM710827     1   0.000      0.983 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-hclust-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-hclust-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures: rows that are significantly different between classes and that can serve as candidate markers for those classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-hclust-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> ATC:hclust 53    2.95e-03         0.870 2
#> ATC:hclust 38    1.84e-02         0.762 3
#> ATC:hclust 53    2.48e-05         0.718 4
#> ATC:hclust 48    4.23e-04         0.397 5
#> ATC:hclust 46    1.20e-04         0.555 6
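
For intuition, the chi-squared statistic underlying the discrete case can be sketched as below; this is a simplified illustration with hypothetical labels, not the code test_to_known_factors() actually runs (which also derives a p-value from the chi-squared distribution):

```python
from collections import Counter

def chi2_stat(x, y):
    """Pearson chi-squared statistic for the contingency table of two label vectors."""
    n = len(x)
    cx, cy, cxy = Counter(x), Counter(y), Counter(zip(x, y))
    stat = 0.0
    for a in cx:
        for b in cy:
            expected = cx[a] * cy[b] / n
            observed = cxy[(a, b)]
            stat += (observed - expected) ** 2 / expected
    return stat

# hypothetical subgroup labels vs. a known discrete annotation:
# perfect association gives the maximal statistic (equal to n here)
print(chi2_stat([1, 1, 1, 2], ["a", "a", "a", "b"]))
```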

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:kmeans**

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "kmeans"]
# you can also extract it by
# res = res_list["ATC:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into one single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-kmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.
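
For intuition, the PAC score counts the fraction of sample pairs whose consensus values are ambiguous (neither close to 0 nor close to 1). A minimal sketch, where the ambiguity interval (0.1, 0.9) is an assumption and the exact interval cola uses may differ:

```python
def one_minus_pac(consensus, lower=0.1, upper=0.9):
    """1-PAC: fraction of sample pairs whose consensus value is NOT ambiguous.

    A value near 0 or 1 means a pair is consistently separated or
    consistently grouped; values inside (lower, upper) are ambiguous.
    """
    n = len(consensus)
    pairs = [consensus[i][j] for i in range(n) for j in range(i + 1, n)]
    ambiguous = sum(1 for v in pairs if lower < v < upper)
    return 1 - ambiguous / len(pairs)

# a tiny 4-sample consensus matrix with one ambiguous pair (value 0.5)
cm = [
    [1.0, 1.0, 0.0, 0.5],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.5, 0.0, 1.0, 1.0],
]
print(one_minus_pac(cm))
```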

select_partition_number(res)

plot of chunk ATC-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 1.000           0.984       0.994         0.5059 0.493   0.493
#> 3 3 0.651           0.458       0.716         0.2614 0.945   0.890
#> 4 4 0.613           0.644       0.753         0.1480 0.761   0.483
#> 5 5 0.716           0.698       0.792         0.0719 0.862   0.525
#> 6 6 0.750           0.656       0.778         0.0455 0.980   0.903

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is determined by the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
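
A minimal sketch of this derivation, assuming the entropy is normalized by log2(k) (which matches the values shown in the table below):

```python
import math

def classify(membership_row):
    """Class label and normalized entropy from one row of the membership matrix."""
    k = len(membership_row)
    # label = the group with the highest membership probability (1-based)
    label = max(range(k), key=lambda g: membership_row[g]) + 1
    # normalized Shannon entropy: 0 = unambiguous, 1 = maximally ambiguous
    entropy = -sum(p * math.log2(p) for p in membership_row if p > 0) / math.log2(k)
    return label, entropy

print(classify([0.0, 1.0]))      # an unambiguous assignment
print(classify([0.656, 0.344]))  # ambiguous, as for GSM710826 in the table below
```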

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2   0.000      1.000 0.000 1.000
#> GSM710829     2   0.000      1.000 0.000 1.000
#> GSM710839     2   0.000      1.000 0.000 1.000
#> GSM710841     2   0.000      1.000 0.000 1.000
#> GSM710843     2   0.000      1.000 0.000 1.000
#> GSM710845     2   0.000      1.000 0.000 1.000
#> GSM710846     2   0.000      1.000 0.000 1.000
#> GSM710849     2   0.000      1.000 0.000 1.000
#> GSM710853     2   0.000      1.000 0.000 1.000
#> GSM710855     1   0.000      0.986 1.000 0.000
#> GSM710858     2   0.000      1.000 0.000 1.000
#> GSM710860     2   0.000      1.000 0.000 1.000
#> GSM710801     2   0.000      1.000 0.000 1.000
#> GSM710813     2   0.000      1.000 0.000 1.000
#> GSM710814     2   0.000      1.000 0.000 1.000
#> GSM710815     2   0.000      1.000 0.000 1.000
#> GSM710816     2   0.000      1.000 0.000 1.000
#> GSM710817     2   0.000      1.000 0.000 1.000
#> GSM710818     1   0.000      0.986 1.000 0.000
#> GSM710819     1   0.000      0.986 1.000 0.000
#> GSM710820     2   0.000      1.000 0.000 1.000
#> GSM710830     2   0.000      1.000 0.000 1.000
#> GSM710831     2   0.000      1.000 0.000 1.000
#> GSM710832     1   0.000      0.986 1.000 0.000
#> GSM710833     1   0.000      0.986 1.000 0.000
#> GSM710834     2   0.000      1.000 0.000 1.000
#> GSM710835     2   0.000      1.000 0.000 1.000
#> GSM710836     1   0.000      0.986 1.000 0.000
#> GSM710837     1   0.000      0.986 1.000 0.000
#> GSM710862     1   0.000      0.986 1.000 0.000
#> GSM710863     1   0.000      0.986 1.000 0.000
#> GSM710865     1   0.000      0.986 1.000 0.000
#> GSM710867     1   0.000      0.986 1.000 0.000
#> GSM710869     1   0.000      0.986 1.000 0.000
#> GSM710871     1   0.000      0.986 1.000 0.000
#> GSM710873     1   0.000      0.986 1.000 0.000
#> GSM710802     1   0.000      0.986 1.000 0.000
#> GSM710803     1   0.000      0.986 1.000 0.000
#> GSM710804     2   0.000      1.000 0.000 1.000
#> GSM710805     2   0.000      1.000 0.000 1.000
#> GSM710806     2   0.000      1.000 0.000 1.000
#> GSM710807     1   0.000      0.986 1.000 0.000
#> GSM710808     2   0.000      1.000 0.000 1.000
#> GSM710809     1   0.000      0.986 1.000 0.000
#> GSM710810     1   0.000      0.986 1.000 0.000
#> GSM710811     1   0.000      0.986 1.000 0.000
#> GSM710812     1   0.000      0.986 1.000 0.000
#> GSM710821     2   0.000      1.000 0.000 1.000
#> GSM710822     1   0.000      0.986 1.000 0.000
#> GSM710823     1   0.000      0.986 1.000 0.000
#> GSM710824     2   0.000      1.000 0.000 1.000
#> GSM710825     2   0.000      1.000 0.000 1.000
#> GSM710826     1   0.929      0.476 0.656 0.344
#> GSM710827     1   0.000      0.986 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-kmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-kmeans-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.
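
As a hedged illustration of such a differential test (cola's actual test statistic may differ), a Welch t-statistic comparing one matrix row between two classes can be computed as:

```python
import math

def welch_t(x, y):
    """Welch t-statistic for one row, comparing samples of class 1 vs. class 2."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    # unbiased variances of the two groups
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

# hypothetical expression values of one row, split by the two classes
group1 = [8.1, 8.4, 8.2, 8.3]
group2 = [9.0, 9.2, 9.1, 9.3]
print(welch_t(group1, group2))
```

A large absolute statistic (hence a small FDR after multiple-testing adjustment) marks the row as a signature.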

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-kmeans-signature_compare

get_signatures() returns the data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, the one-way ANOVA test is applied; if the known annotation is discrete, the chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> ATC:kmeans 53    6.98e-04         1.000 2
#> ATC:kmeans 39    3.23e-05         0.803 3
#> ATC:kmeans 43    2.32e-02         0.156 4
#> ATC:kmeans 45    3.17e-04         0.858 5
#> ATC:kmeans 45    3.45e-04         0.366 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:skmeans**

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "skmeans"]
# you can also extract it by
# res = res_list["ATC:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into one single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-skmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.
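
Both indices compare two partitions by how they treat pairs of samples; a minimal sketch with hypothetical labels (not cola's internal code):

```python
from itertools import combinations

def rand_and_jaccard(p1, p2):
    """Rand and Jaccard indices between two partitions of the same samples."""
    together_both = together_one = apart_both = 0
    for i, j in combinations(range(len(p1)), 2):
        same1, same2 = p1[i] == p1[j], p2[i] == p2[j]
        if same1 and same2:
            together_both += 1   # pair grouped together in both partitions
        elif same1 or same2:
            together_one += 1    # pair grouped together in only one partition
        else:
            apart_both += 1      # pair separated in both partitions
    n_pairs = together_both + together_one + apart_both
    rand = (together_both + apart_both) / n_pairs
    jaccard = together_both / (together_both + together_one)
    return rand, jaccard

# hypothetical partitions of four samples into 2 and then 3 groups
print(rand_and_jaccard([1, 1, 2, 2], [1, 1, 2, 3]))
```

Identical partitions give 1 for both indices, which is why very high values argue against accepting the larger k.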

select_partition_number(res)

plot of chunk ATC-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 1.000           1.000       1.000         0.5071 0.493   0.493
#> 3 3 1.000           0.989       0.993         0.2909 0.853   0.703
#> 4 4 0.865           0.788       0.894         0.0931 0.968   0.907
#> 5 5 0.784           0.623       0.847         0.0489 0.938   0.813
#> 6 6 0.779           0.631       0.807         0.0333 0.973   0.909

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 3
#> attr(,"optional")
#> [1] 2

There is also an optional best \(k\) = 2 that is worth checking.

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is determined by the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette p1 p2
#> GSM710828     2       0          1  0  1
#> GSM710829     2       0          1  0  1
#> GSM710839     2       0          1  0  1
#> GSM710841     2       0          1  0  1
#> GSM710843     2       0          1  0  1
#> GSM710845     2       0          1  0  1
#> GSM710846     2       0          1  0  1
#> GSM710849     2       0          1  0  1
#> GSM710853     2       0          1  0  1
#> GSM710855     1       0          1  1  0
#> GSM710858     2       0          1  0  1
#> GSM710860     2       0          1  0  1
#> GSM710801     2       0          1  0  1
#> GSM710813     2       0          1  0  1
#> GSM710814     2       0          1  0  1
#> GSM710815     2       0          1  0  1
#> GSM710816     2       0          1  0  1
#> GSM710817     2       0          1  0  1
#> GSM710818     1       0          1  1  0
#> GSM710819     1       0          1  1  0
#> GSM710820     2       0          1  0  1
#> GSM710830     2       0          1  0  1
#> GSM710831     2       0          1  0  1
#> GSM710832     1       0          1  1  0
#> GSM710833     1       0          1  1  0
#> GSM710834     2       0          1  0  1
#> GSM710835     2       0          1  0  1
#> GSM710836     1       0          1  1  0
#> GSM710837     1       0          1  1  0
#> GSM710862     1       0          1  1  0
#> GSM710863     1       0          1  1  0
#> GSM710865     1       0          1  1  0
#> GSM710867     1       0          1  1  0
#> GSM710869     1       0          1  1  0
#> GSM710871     1       0          1  1  0
#> GSM710873     1       0          1  1  0
#> GSM710802     1       0          1  1  0
#> GSM710803     1       0          1  1  0
#> GSM710804     2       0          1  0  1
#> GSM710805     2       0          1  0  1
#> GSM710806     2       0          1  0  1
#> GSM710807     1       0          1  1  0
#> GSM710808     2       0          1  0  1
#> GSM710809     1       0          1  1  0
#> GSM710810     1       0          1  1  0
#> GSM710811     1       0          1  1  0
#> GSM710812     1       0          1  1  0
#> GSM710821     2       0          1  0  1
#> GSM710822     1       0          1  1  0
#> GSM710823     1       0          1  1  0
#> GSM710824     2       0          1  0  1
#> GSM710825     2       0          1  0  1
#> GSM710826     1       0          1  1  0
#> GSM710827     1       0          1  1  0

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.
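
A minimal sketch of where these probabilities come from (hypothetical partition labels, not cola's internal code): the consensus value for a pair of samples is the fraction of resampled partitions in which the two fall into the same group.

```python
def consensus_matrix(partitions):
    """Consensus matrix from a list of partitions (one label vector each)."""
    n = len(partitions[0])
    cm = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    # this pair is grouped together in this partition
                    cm[i][j] += 1 / len(partitions)
    return cm

# three hypothetical partitions of four samples
cm = consensus_matrix([[1, 1, 2, 2], [1, 1, 2, 2], [1, 2, 2, 2]])
print(cm[0][1], cm[2][3])
```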

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-skmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-skmeans-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-skmeans-signature_compare

get_signatures() returns the data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-skmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, the one-way ANOVA test is applied; if the known annotation is discrete, the chi-squared contingency table test is applied.

test_to_known_factors(res)
#>              n specimen(p) individual(p) k
#> ATC:skmeans 54    0.000495         1.000 2
#> ATC:skmeans 54    0.000732         0.822 3
#> ATC:skmeans 45    0.009476         0.672 4
#> ATC:skmeans 34    0.018375         0.713 5
#> ATC:skmeans 36    0.003927         0.659 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:pam*

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "pam"]
# you can also extract it by
# res = res_list["ATC:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 5.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into one single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-pam-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 1.000           0.983       0.993         0.5090 0.491   0.491
#> 3 3 1.000           0.991       0.996         0.2823 0.840   0.680
#> 4 4 0.831           0.912       0.927         0.1190 0.895   0.714
#> 5 5 0.934           0.878       0.952         0.1054 0.909   0.675
#> 6 6 0.876           0.775       0.896         0.0316 0.939   0.708

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 5
#> attr(,"optional")
#> [1] 2 3

There are also optional best \(k\) = 2 and 3 that are worth checking.

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is determined by the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2   0.000      0.992 0.000 1.000
#> GSM710829     2   0.000      0.992 0.000 1.000
#> GSM710839     2   0.000      0.992 0.000 1.000
#> GSM710841     2   0.000      0.992 0.000 1.000
#> GSM710843     2   0.000      0.992 0.000 1.000
#> GSM710845     2   0.000      0.992 0.000 1.000
#> GSM710846     2   0.000      0.992 0.000 1.000
#> GSM710849     2   0.000      0.992 0.000 1.000
#> GSM710853     2   0.000      0.992 0.000 1.000
#> GSM710855     1   0.000      0.992 1.000 0.000
#> GSM710858     2   0.000      0.992 0.000 1.000
#> GSM710860     2   0.000      0.992 0.000 1.000
#> GSM710801     2   0.000      0.992 0.000 1.000
#> GSM710813     2   0.000      0.992 0.000 1.000
#> GSM710814     2   0.000      0.992 0.000 1.000
#> GSM710815     2   0.000      0.992 0.000 1.000
#> GSM710816     2   0.000      0.992 0.000 1.000
#> GSM710817     2   0.000      0.992 0.000 1.000
#> GSM710818     1   0.000      0.992 1.000 0.000
#> GSM710819     1   0.000      0.992 1.000 0.000
#> GSM710820     2   0.000      0.992 0.000 1.000
#> GSM710830     1   0.722      0.746 0.800 0.200
#> GSM710831     2   0.000      0.992 0.000 1.000
#> GSM710832     1   0.000      0.992 1.000 0.000
#> GSM710833     1   0.000      0.992 1.000 0.000
#> GSM710834     2   0.000      0.992 0.000 1.000
#> GSM710835     2   0.000      0.992 0.000 1.000
#> GSM710836     1   0.000      0.992 1.000 0.000
#> GSM710837     1   0.000      0.992 1.000 0.000
#> GSM710862     1   0.000      0.992 1.000 0.000
#> GSM710863     1   0.000      0.992 1.000 0.000
#> GSM710865     1   0.000      0.992 1.000 0.000
#> GSM710867     1   0.000      0.992 1.000 0.000
#> GSM710869     1   0.000      0.992 1.000 0.000
#> GSM710871     1   0.000      0.992 1.000 0.000
#> GSM710873     1   0.000      0.992 1.000 0.000
#> GSM710802     1   0.000      0.992 1.000 0.000
#> GSM710803     1   0.000      0.992 1.000 0.000
#> GSM710804     2   0.000      0.992 0.000 1.000
#> GSM710805     2   0.000      0.992 0.000 1.000
#> GSM710806     2   0.000      0.992 0.000 1.000
#> GSM710807     1   0.000      0.992 1.000 0.000
#> GSM710808     2   0.000      0.992 0.000 1.000
#> GSM710809     1   0.000      0.992 1.000 0.000
#> GSM710810     1   0.000      0.992 1.000 0.000
#> GSM710811     1   0.000      0.992 1.000 0.000
#> GSM710812     1   0.000      0.992 1.000 0.000
#> GSM710821     2   0.000      0.992 0.000 1.000
#> GSM710822     1   0.000      0.992 1.000 0.000
#> GSM710823     1   0.000      0.992 1.000 0.000
#> GSM710824     2   0.730      0.739 0.204 0.796
#> GSM710825     2   0.000      0.992 0.000 1.000
#> GSM710826     1   0.000      0.992 1.000 0.000
#> GSM710827     1   0.000      0.992 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-pam-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-pam-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-pam-signature_compare

get_signatures() returns the data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, the one-way ANOVA test is applied; if the known annotation is discrete, the chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n specimen(p) individual(p) k
#> ATC:pam 54    0.000221         1.000 2
#> ATC:pam 54    0.000282         0.943 3
#> ATC:pam 54    0.000139         0.711 4
#> ATC:pam 51    0.001033         0.536 5
#> ATC:pam 45    0.005440         0.308 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:mclust*

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "mclust"]
# you can also extract it by
# res = res_list["ATC:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into one single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-mclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.576           0.933       0.953         0.4922 0.491   0.491
#> 3 3 0.943           0.944       0.976         0.3278 0.836   0.672
#> 4 4 0.889           0.853       0.926         0.0916 0.881   0.676
#> 5 5 0.817           0.834       0.889         0.0544 0.973   0.902
#> 6 6 0.762           0.690       0.816         0.0600 0.910   0.675

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 3
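The voting idea behind the suggestion can be sketched in base R from the statistics above (a simplification and an assumption on our part; cola's actual rule also involves stability thresholds):

```r
# Each of 1-PAC, mean silhouette and concordance "votes" for the k at which
# it is maximal; the k with the most votes wins. Values copied from the
# get_stats(res) table above.
st = data.frame(
    k               = 2:6,
    one_minus_PAC   = c(0.576, 0.943, 0.889, 0.817, 0.762),
    mean_silhouette = c(0.933, 0.944, 0.853, 0.834, 0.690),
    concordance     = c(0.953, 0.976, 0.926, 0.889, 0.816)
)
votes  = sapply(st[, -1], function(x) st$k[which.max(x)])
best_k = as.integer(names(which.max(table(votes))))
best_k  # 3, agreeing with suggest_best_k(res)
```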

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability that a sample belongs to the corresponding group. The final class label of a sample is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
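The class and entropy columns can be sketched from a toy membership matrix (values copied from three rows of the table below; the normalization by log2(k) is our assumption, chosen so the entropy lies in [0, 1]):

```r
# Toy membership matrix: one row per sample, one column per group.
p = rbind(c(0.132, 0.868),
          c(0.000, 1.000),
          c(1.000, 0.000))
# The class label is the group with the highest probability.
cls = max.col(p)   # 2, 2, 1
# Shannon entropy of each row, normalized by log2(k).
entropy = apply(p, 1, function(x) {
    x = x[x > 0]
    -sum(x * log2(x)) / log2(ncol(p))
})
round(entropy, 4)  # 0.5629 0.0000 0.0000, matching the table below
```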


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.5629      0.923 0.132 0.868
#> GSM710829     2  0.5629      0.923 0.132 0.868
#> GSM710839     2  0.5629      0.923 0.132 0.868
#> GSM710841     2  0.0000      0.918 0.000 1.000
#> GSM710843     2  0.5629      0.923 0.132 0.868
#> GSM710845     2  0.5629      0.923 0.132 0.868
#> GSM710846     2  0.0000      0.918 0.000 1.000
#> GSM710849     2  0.0000      0.918 0.000 1.000
#> GSM710853     2  0.0672      0.916 0.008 0.992
#> GSM710855     1  0.0000      0.976 1.000 0.000
#> GSM710858     2  0.0000      0.918 0.000 1.000
#> GSM710860     2  0.0000      0.918 0.000 1.000
#> GSM710801     2  0.5629      0.923 0.132 0.868
#> GSM710813     2  0.5629      0.923 0.132 0.868
#> GSM710814     2  0.5629      0.923 0.132 0.868
#> GSM710815     2  0.5629      0.923 0.132 0.868
#> GSM710816     2  0.5629      0.923 0.132 0.868
#> GSM710817     2  0.0000      0.918 0.000 1.000
#> GSM710818     1  0.0000      0.976 1.000 0.000
#> GSM710819     1  0.0000      0.976 1.000 0.000
#> GSM710820     2  0.0000      0.918 0.000 1.000
#> GSM710830     1  0.9248      0.401 0.660 0.340
#> GSM710831     2  0.0000      0.918 0.000 1.000
#> GSM710832     1  0.0000      0.976 1.000 0.000
#> GSM710833     1  0.0000      0.976 1.000 0.000
#> GSM710834     2  0.5629      0.923 0.132 0.868
#> GSM710835     2  0.0000      0.918 0.000 1.000
#> GSM710836     1  0.0000      0.976 1.000 0.000
#> GSM710837     1  0.0000      0.976 1.000 0.000
#> GSM710862     1  0.0000      0.976 1.000 0.000
#> GSM710863     1  0.0000      0.976 1.000 0.000
#> GSM710865     1  0.0000      0.976 1.000 0.000
#> GSM710867     1  0.0000      0.976 1.000 0.000
#> GSM710869     1  0.0000      0.976 1.000 0.000
#> GSM710871     1  0.0000      0.976 1.000 0.000
#> GSM710873     1  0.0000      0.976 1.000 0.000
#> GSM710802     1  0.0000      0.976 1.000 0.000
#> GSM710803     1  0.0000      0.976 1.000 0.000
#> GSM710804     2  0.0000      0.918 0.000 1.000
#> GSM710805     2  0.0000      0.918 0.000 1.000
#> GSM710806     2  0.0000      0.918 0.000 1.000
#> GSM710807     1  0.0000      0.976 1.000 0.000
#> GSM710808     2  0.5629      0.923 0.132 0.868
#> GSM710809     1  0.4815      0.866 0.896 0.104
#> GSM710810     1  0.0000      0.976 1.000 0.000
#> GSM710811     1  0.0000      0.976 1.000 0.000
#> GSM710812     1  0.0000      0.976 1.000 0.000
#> GSM710821     2  0.5842      0.917 0.140 0.860
#> GSM710822     1  0.0000      0.976 1.000 0.000
#> GSM710823     1  0.0000      0.976 1.000 0.000
#> GSM710824     2  0.5946      0.914 0.144 0.856
#> GSM710825     2  0.5946      0.914 0.144 0.856
#> GSM710826     1  0.4161      0.886 0.916 0.084
#> GSM710827     1  0.0000      0.976 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-mclust-consensus-heatmap-1
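The construction of a consensus matrix can be sketched with toy data (three partitions of four samples, not the report's partitions): each entry is the fraction of partitions in which the two samples fall into the same group.

```r
# Three random partitions of four samples (rows = partitions).
parts = rbind(c(1, 1, 2, 2),
              c(1, 1, 2, 2),
              c(1, 2, 2, 2))
consensus = matrix(0, 4, 4)
for (i in seq_len(nrow(parts)))
    consensus = consensus + outer(parts[i, ], parts[i, ], "==")
consensus = consensus / nrow(parts)
consensus[1, 2]  # 2/3: samples 1 and 2 co-cluster in 2 of the 3 partitions
```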

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-mclust-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures which are significantly different between the classes; these can serve as candidate markers for certain classes. The following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-mclust-signature_compare

get_signatures() returns the signature data frame invisibly, so to keep the list of signatures, the function call should be assigned to a variable explicitly. In the following code, when the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
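A hypothetical follow-up (the column names come from the output above; the toy values are not real results): filter the signatures by FDR and map the row indices back to the input matrix.

```r
# Toy subset of a signature table; in the real analysis this comes from
# get_signatures(res, k = ..., plot = FALSE).
tb = data.frame(which_row = c(38, 40, 98),
                fdr       = c(0.0428, 0.0187, 0.0094))
sig_rows = tb$which_row[tb$fdr < 0.02]  # 40 and 98 pass the cutoff
# rownames(mat)[sig_rows]  # would give the feature names in the real analysis
```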

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-mclust-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n specimen(p) individual(p) k
#> ATC:mclust 53    0.000314         1.000 2
#> ATC:mclust 53    0.000508         0.677 3
#> ATC:mclust 48    0.000235         0.809 4
#> ATC:mclust 51    0.000276         0.667 5
#> ATC:mclust 47    0.000940         0.332 6
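The two underlying tests can be sketched with toy vectors (hypothetical data, not the report's annotations):

```r
# Predicted subgroups and a discrete annotation -> chi-squared test.
cls  = factor(rep(c(1, 2), each = 20))
spec = factor(c(rep("a", 16), rep("b", 4), rep("a", 6), rep("b", 14)))
p_chisq = chisq.test(table(cls, spec))$p.value

# A numeric annotation -> one-way ANOVA.
age   = c(28:37, 55:64)
p_aov = summary(aov(age ~ cls[1:20]))[[1]][["Pr(>F)"]][1]

c(p_chisq, p_aov)  # both small: the annotation associates with the subgroups
```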

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See the cola vignette for more detailed explanations.


ATC:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "NMF"]
# you can also extract it by
# res = res_list["ATC:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 54 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots generated from res for every k (number of subgroups) onto a single page, which allows an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-NMF-collect-plots

Each panel in the collected plot can also be generated by an individual function; these functions are demonstrated later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if the two partitions are too similar, k is not accepted as being better than k-1.

select_partition_number(res)

plot of chunk ATC-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.959           0.946       0.976         0.4933 0.502   0.502
#> 3 3 0.667           0.811       0.905         0.2673 0.811   0.643
#> 4 4 0.612           0.611       0.798         0.1498 0.803   0.522
#> 5 5 0.566           0.534       0.727         0.0776 0.828   0.462
#> 6 6 0.563           0.424       0.648         0.0408 0.907   0.630

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability that a sample belongs to the corresponding group. The final class label of a sample is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM710828     2  0.0000      0.987 0.000 1.000
#> GSM710829     2  0.0000      0.987 0.000 1.000
#> GSM710839     2  0.0000      0.987 0.000 1.000
#> GSM710841     2  0.0000      0.987 0.000 1.000
#> GSM710843     2  0.0000      0.987 0.000 1.000
#> GSM710845     2  0.0000      0.987 0.000 1.000
#> GSM710846     2  0.0000      0.987 0.000 1.000
#> GSM710849     2  0.0000      0.987 0.000 1.000
#> GSM710853     2  0.0000      0.987 0.000 1.000
#> GSM710855     1  0.0000      0.958 1.000 0.000
#> GSM710858     2  0.0000      0.987 0.000 1.000
#> GSM710860     2  0.0000      0.987 0.000 1.000
#> GSM710801     2  0.0000      0.987 0.000 1.000
#> GSM710813     2  0.0000      0.987 0.000 1.000
#> GSM710814     2  0.0000      0.987 0.000 1.000
#> GSM710815     2  0.0000      0.987 0.000 1.000
#> GSM710816     2  0.0000      0.987 0.000 1.000
#> GSM710817     2  0.0000      0.987 0.000 1.000
#> GSM710818     1  0.0672      0.953 0.992 0.008
#> GSM710819     1  0.0000      0.958 1.000 0.000
#> GSM710820     2  0.0000      0.987 0.000 1.000
#> GSM710830     2  0.0938      0.976 0.012 0.988
#> GSM710831     2  0.0000      0.987 0.000 1.000
#> GSM710832     1  0.0000      0.958 1.000 0.000
#> GSM710833     1  0.0000      0.958 1.000 0.000
#> GSM710834     2  0.0000      0.987 0.000 1.000
#> GSM710835     2  0.0000      0.987 0.000 1.000
#> GSM710836     1  0.0000      0.958 1.000 0.000
#> GSM710837     1  0.0000      0.958 1.000 0.000
#> GSM710862     1  0.6887      0.786 0.816 0.184
#> GSM710863     1  0.0000      0.958 1.000 0.000
#> GSM710865     1  0.0000      0.958 1.000 0.000
#> GSM710867     1  0.0000      0.958 1.000 0.000
#> GSM710869     1  0.0000      0.958 1.000 0.000
#> GSM710871     1  0.0000      0.958 1.000 0.000
#> GSM710873     1  0.0000      0.958 1.000 0.000
#> GSM710802     1  0.3733      0.906 0.928 0.072
#> GSM710803     1  0.0000      0.958 1.000 0.000
#> GSM710804     2  0.0000      0.987 0.000 1.000
#> GSM710805     2  0.0000      0.987 0.000 1.000
#> GSM710806     2  0.0000      0.987 0.000 1.000
#> GSM710807     1  0.0000      0.958 1.000 0.000
#> GSM710808     2  0.0000      0.987 0.000 1.000
#> GSM710809     2  0.8443      0.603 0.272 0.728
#> GSM710810     1  0.9710      0.367 0.600 0.400
#> GSM710811     1  0.0000      0.958 1.000 0.000
#> GSM710812     1  0.2948      0.922 0.948 0.052
#> GSM710821     2  0.0000      0.987 0.000 1.000
#> GSM710822     1  0.0000      0.958 1.000 0.000
#> GSM710823     1  0.6973      0.781 0.812 0.188
#> GSM710824     2  0.0000      0.987 0.000 1.000
#> GSM710825     2  0.0000      0.987 0.000 1.000
#> GSM710826     2  0.4298      0.893 0.088 0.912
#> GSM710827     1  0.0000      0.958 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-NMF-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-NMF-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures which are significantly different between the classes; these can serve as candidate markers for certain classes. The following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-NMF-signature_compare

get_signatures() returns the signature data frame invisibly, so to keep the list of signatures, the function call should be assigned to a variable explicitly. In the following code, when the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-NMF-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n specimen(p) individual(p) k
#> ATC:NMF 53     0.00295         0.870 2
#> ATC:NMF 51     0.00799         0.786 3
#> ATC:NMF 39     0.01872         0.138 4
#> ATC:NMF 34     0.01360         0.307 5
#> ATC:NMF 25     0.08583         0.192 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See the cola vignette for more detailed explanations.

Session info

sessionInfo()
#> R version 3.6.0 (2019-04-26)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: CentOS Linux 7 (Core)
#> 
#> Matrix products: default
#> BLAS:   /usr/lib64/libblas.so.3.4.2
#> LAPACK: /usr/lib64/liblapack.so.3.4.2
#> 
#> locale:
#>  [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C               LC_TIME=en_GB.UTF-8       
#>  [4] LC_COLLATE=en_GB.UTF-8     LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8   
#>  [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
#> [10] LC_TELEPHONE=C             LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C       
#> 
#> attached base packages:
#> [1] grid      stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] genefilter_1.66.0    ComplexHeatmap_2.3.1 markdown_1.1         knitr_1.26          
#> [5] GetoptLong_0.1.7     cola_1.3.2          
#> 
#> loaded via a namespace (and not attached):
#>  [1] circlize_0.4.8       shape_1.4.4          xfun_0.11            slam_0.1-46         
#>  [5] lattice_0.20-38      splines_3.6.0        colorspace_1.4-1     vctrs_0.2.0         
#>  [9] stats4_3.6.0         blob_1.2.0           XML_3.98-1.20        survival_2.44-1.1   
#> [13] rlang_0.4.2          pillar_1.4.2         DBI_1.0.0            BiocGenerics_0.30.0 
#> [17] bit64_0.9-7          RColorBrewer_1.1-2   matrixStats_0.55.0   stringr_1.4.0       
#> [21] GlobalOptions_0.1.1  evaluate_0.14        memoise_1.1.0        Biobase_2.44.0      
#> [25] IRanges_2.18.3       parallel_3.6.0       AnnotationDbi_1.46.1 highr_0.8           
#> [29] Rcpp_1.0.3           xtable_1.8-4         backports_1.1.5      S4Vectors_0.22.1    
#> [33] annotate_1.62.0      skmeans_0.2-11       bit_1.1-14           microbenchmark_1.4-7
#> [37] brew_1.0-6           impute_1.58.0        rjson_0.2.20         png_0.1-7           
#> [41] digest_0.6.23        stringi_1.4.3        polyclip_1.10-0      clue_0.3-57         
#> [45] tools_3.6.0          bitops_1.0-6         magrittr_1.5         eulerr_6.0.0        
#> [49] RCurl_1.95-4.12      RSQLite_2.1.4        tibble_2.1.3         cluster_2.1.0       
#> [53] crayon_1.3.4         pkgconfig_2.0.3      zeallot_0.1.0        Matrix_1.2-17       
#> [57] xml2_1.2.2           httr_1.4.1           R6_2.4.1             mclust_5.4.5        
#> [61] compiler_3.6.0