cola Report for GDS5217

Date: 2019-12-25 22:04:04 CET, cola version: 1.3.2


Summary

All available functions which can be applied to this res_list object:

res_list
#> A 'ConsensusPartitionList' object with 24 methods.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows are extracted by 'SD, CV, MAD, ATC' methods.
#>   Subgroups are detected by 'hclust, kmeans, skmeans, pam, mclust, NMF' method.
#>   Number of partitions are tried for k = 2, 3, 4, 5, 6.
#>   Performed in total 30000 partitions by row resampling.
#> 
#> Following methods can be applied to this 'ConsensusPartitionList' object:
#>  [1] "cola_report"           "collect_classes"       "collect_plots"         "collect_stats"        
#>  [5] "colnames"              "functional_enrichment" "get_anno_col"          "get_anno"             
#>  [9] "get_classes"           "get_matrix"            "get_membership"        "get_stats"            
#> [13] "is_best_k"             "is_stable_k"           "ncol"                  "nrow"                 
#> [17] "rownames"              "show"                  "suggest_best_k"        "test_to_known_factors"
#> [21] "top_rows_heatmap"      "top_rows_overlap"     
#> 
#> You can get result for a single method by, e.g. object["SD", "hclust"] or object["SD:hclust"]
#> or a subset of methods by object[c("SD", "CV"), c("hclust", "kmeans")]

The call of run_all_consensus_partition_methods() was:

#> run_all_consensus_partition_methods(data = mat, mc.cores = 4, anno = anno)

Dimension of the input matrix:

mat = get_matrix(res_list)
dim(mat)
#> [1] 51941    70

Density distribution

The density distribution of each sample is visualized as one column in the following heatmap. Columns are clustered using the Kolmogorov-Smirnov statistic between every pair of distributions as the distance.

library(ComplexHeatmap)
densityHeatmap(mat, top_annotation = HeatmapAnnotation(df = get_anno(res_list), 
    col = get_anno_col(res_list)), ylab = "value", cluster_columns = TRUE, show_column_names = FALSE,
    mc.cores = 4)

plot of chunk density-heatmap

Suggest the best k

The following table shows the best k (number of partitions) for each combination of top-value method and partition method. Clicking on a method name in the table jumps to the section for that combination of methods.

The cola vignette explains the definition of the metrics used for determining the best number of partitions.

suggest_best_k(res_list)
Method      The best k 1-PAC Mean silhouette Concordance
ATC:pam 3 1.000 0.970 0.987 **
MAD:kmeans 2 0.969 0.941 0.973 **
ATC:mclust 2 0.963 0.940 0.968 **
CV:kmeans 2 0.940 0.940 0.974 *
SD:kmeans 2 0.885 0.934 0.966
SD:skmeans 2 0.879 0.911 0.963
MAD:NMF 2 0.798 0.901 0.956
CV:NMF 2 0.791 0.854 0.941
MAD:pam 2 0.780 0.880 0.947
ATC:NMF 2 0.778 0.912 0.959
MAD:skmeans 2 0.741 0.911 0.958
CV:skmeans 2 0.691 0.864 0.941
SD:pam 2 0.691 0.857 0.936
ATC:skmeans 2 0.691 0.928 0.959
SD:NMF 2 0.597 0.812 0.920
ATC:hclust 5 0.517 0.739 0.799
ATC:kmeans 2 0.484 0.852 0.908
SD:mclust 2 0.448 0.704 0.822
MAD:hclust 2 0.426 0.757 0.880
CV:pam 2 0.418 0.787 0.895
SD:hclust 2 0.394 0.746 0.883
CV:hclust 2 0.352 0.715 0.863
MAD:mclust 2 0.351 0.701 0.798
CV:mclust 3 0.333 0.665 0.715

**: 1-PAC > 0.95, *: 1-PAC > 0.9
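Whether a suggested k is indeed reliable can be checked per combination of methods with is_best_k() and is_stable_k(), which are listed in the object summary above (a sketch; the exact return values depend on this dataset):

```r
# extract the combination with the highest 1-PAC in the table above
res = res_list["ATC", "pam"]
suggest_best_k(res)       # the suggested best k for ATC:pam
is_best_k(res, k = 3)     # is k = 3 the best k for this combination?
is_stable_k(res, k = 3)   # by default, stability is judged from the 1-PAC score
```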

CDF of consensus matrices

Cumulative distribution function (CDF) curves of the consensus matrices for all methods.

collect_plots(res_list, fun = plot_ecdf)

plot of chunk collect-plots

Consensus heatmap

Consensus heatmaps for all methods. (What is a consensus heatmap?)

collect_plots(res_list, k = 2, fun = consensus_heatmap, mc.cores = 4)

plot of chunk tab-collect-consensus-heatmap-1

Membership heatmap

Membership heatmaps for all methods. (What is a membership heatmap?)

collect_plots(res_list, k = 2, fun = membership_heatmap, mc.cores = 4)

plot of chunk tab-collect-membership-heatmap-1

Signature heatmap

Signature heatmaps for all methods. (What is a signature heatmap?)

Note that in the following heatmaps, rows are scaled.

collect_plots(res_list, k = 2, fun = get_signatures, mc.cores = 4)

plot of chunk tab-collect-get-signatures-1

Statistics table

The statistics used for measuring the stability of consensus partitioning. (How are they defined?)

get_stats(res_list, k = 2)
#>             k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> SD:NMF      2 0.597           0.812       0.920          0.498 0.496   0.496
#> CV:NMF      2 0.791           0.854       0.941          0.502 0.493   0.493
#> MAD:NMF     2 0.798           0.901       0.956          0.502 0.496   0.496
#> ATC:NMF     2 0.778           0.912       0.959          0.480 0.519   0.519
#> SD:skmeans  2 0.879           0.911       0.963          0.506 0.494   0.494
#> CV:skmeans  2 0.691           0.864       0.941          0.506 0.494   0.494
#> MAD:skmeans 2 0.741           0.911       0.958          0.505 0.496   0.496
#> ATC:skmeans 2 0.691           0.928       0.959          0.507 0.493   0.493
#> SD:mclust   2 0.448           0.704       0.822          0.484 0.499   0.499
#> CV:mclust   2 0.723           0.839       0.920          0.382 0.627   0.627
#> MAD:mclust  2 0.351           0.701       0.798          0.462 0.499   0.499
#> ATC:mclust  2 0.963           0.940       0.968          0.294 0.731   0.731
#> SD:kmeans   2 0.885           0.934       0.966          0.493 0.508   0.508
#> CV:kmeans   2 0.940           0.940       0.974          0.493 0.508   0.508
#> MAD:kmeans  2 0.969           0.941       0.973          0.494 0.503   0.503
#> ATC:kmeans  2 0.484           0.852       0.908          0.490 0.503   0.503
#> SD:pam      2 0.691           0.857       0.936          0.502 0.493   0.493
#> CV:pam      2 0.418           0.787       0.895          0.484 0.499   0.499
#> MAD:pam     2 0.780           0.880       0.947          0.504 0.496   0.496
#> ATC:pam     2 0.627           0.889       0.922          0.481 0.496   0.496
#> SD:hclust   2 0.394           0.746       0.883          0.457 0.526   0.526
#> CV:hclust   2 0.352           0.715       0.863          0.476 0.508   0.508
#> MAD:hclust  2 0.426           0.757       0.880          0.463 0.508   0.508
#> ATC:hclust  2 0.573           0.869       0.896          0.270 0.658   0.658

The following heatmap plots the partition from each combination of methods, where the lightness corresponds to the silhouette scores of the samples under that method. On top, the consensus subgroup is inferred from all methods, taking the mean silhouette scores as weights.

collect_stats(res_list, k = 2)

plot of chunk tab-collect-stats-from-consensus-partition-list-1

Partition from all methods

Collect partitions from all methods:

collect_classes(res_list, k = 2)

plot of chunk tab-collect-classes-from-consensus-partition-list-1

Top rows overlap

Overlap of top rows from different top-row methods:

top_rows_overlap(res_list, top_n = 1000, method = "euler")

plot of chunk tab-top-rows-overlap-by-euler-1

Also visualize the correspondence of rankings between different top-row methods:

top_rows_overlap(res_list, top_n = 1000, method = "correspondance")

plot of chunk tab-top-rows-overlap-by-correspondance-1

Heatmaps of the top rows:

top_rows_heatmap(res_list, top_n = 1000)

plot of chunk tab-top-rows-heatmap-1

Test to known annotations

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.
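The choice between the two tests described above can be sketched as follows. This is a simplified illustration, not the internal cola implementation; test_annotation() is a hypothetical helper:

```r
# hypothetical helper illustrating the test dispatch, not cola internals
test_annotation = function(class, anno) {
    if (is.numeric(anno)) {
        # numeric annotation: one-way ANOVA against the subgroup labels
        oneway.test(anno ~ factor(class))$p.value
    } else {
        # discrete annotation: chi-squared test on the contingency table
        chisq.test(table(class, anno))$p.value
    }
}
```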

test_to_known_factors(res_list, k = 2)
#>              n  age(p) time(p) tissue(p) individual(p) k
#> SD:NMF      62 0.10821 0.05338  2.74e-01      0.041624 2
#> CV:NMF      64 0.02512 0.04233  1.00e+00      0.015054 2
#> MAD:NMF     66 0.01697 0.03921  1.00e+00      0.008133 2
#> ATC:NMF     69 0.00101 0.57721  6.28e-01      0.000073 2
#> SD:skmeans  68 0.01545 0.06584  1.00e+00      0.016817 2
#> CV:skmeans  67 0.01237 0.01686  1.00e+00      0.011921 2
#> MAD:skmeans 69 0.01531 0.01741  1.00e+00      0.012268 2
#> ATC:skmeans 70 0.00858 0.57771  8.21e-01      0.000987 2
#> SD:mclust   70 0.86699 0.72499  7.21e-13      0.774877 2
#> CV:mclust   64 0.08335 0.09536  1.00e+00      0.096889 2
#> MAD:mclust  62 1.00000 0.40661  6.81e-12      0.618463 2
#> ATC:mclust  69 0.15720 0.01244  8.75e-01      0.043750 2
#> SD:kmeans   69 0.03314 0.05068  1.00e+00      0.008091 2
#> CV:kmeans   68 0.04682 0.06593  1.00e+00      0.009739 2
#> MAD:kmeans  68 0.02313 0.03978  1.00e+00      0.008004 2
#> ATC:kmeans  69 0.00375 0.77857  5.75e-01      0.000334 2
#> SD:pam      67 0.03174 0.00470  8.81e-01      0.020636 2
#> CV:pam      65 0.07162 0.00214  8.32e-01      0.102867 2
#> MAD:pam     67 0.00359 0.01601  5.51e-01      0.016246 2
#> ATC:pam     70 0.00236 0.91333  2.46e-01      0.000845 2
#> SD:hclust   62 0.02352 0.03902  5.99e-01      0.002946 2
#> CV:hclust   59 0.03182 0.01766  1.00e+00      0.015498 2
#> MAD:hclust  61 0.02865 0.00479  1.00e+00      0.008955 2
#> ATC:hclust  69 0.04250 0.32015  7.50e-01      0.020055 2

Results for each method


SD:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "hclust"]
# you can also extract it by
# res = res_list["SD:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk SD-hclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.394           0.746       0.883         0.4566 0.526   0.526
#> 3 3 0.306           0.417       0.756         0.2653 0.927   0.866
#> 4 4 0.314           0.414       0.671         0.0996 0.939   0.876
#> 5 5 0.359           0.476       0.675         0.0956 0.762   0.492
#> 6 6 0.410           0.492       0.703         0.0612 0.938   0.779

suggest_best_k() suggests the best k based on these statistics; the rules are explained in the cola vignette.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of belonging to a certain group; the final class label of an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
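The class assignment described above, where an item goes to the group with the highest membership probability, can be sketched as (a simplified illustration, not cola's internal code):

```r
# membership matrix with columns p1, p2, ...
membership = get_membership(res, k = 2)
# the class label is the column index with the highest probability
class = apply(membership, 1, which.max)
```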

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000     0.8630 1.000 0.000
#> GSM701769     1  0.0000     0.8630 1.000 0.000
#> GSM701768     1  0.2043     0.8631 0.968 0.032
#> GSM701767     1  0.9170     0.5386 0.668 0.332
#> GSM701766     1  0.9954     0.1726 0.540 0.460
#> GSM701806     1  0.0000     0.8630 1.000 0.000
#> GSM701805     1  0.0000     0.8630 1.000 0.000
#> GSM701804     1  0.3879     0.8532 0.924 0.076
#> GSM701803     1  0.2043     0.8625 0.968 0.032
#> GSM701775     1  0.5519     0.8155 0.872 0.128
#> GSM701774     1  0.5946     0.8052 0.856 0.144
#> GSM701773     2  0.0938     0.8468 0.012 0.988
#> GSM701772     1  0.9732     0.3510 0.596 0.404
#> GSM701771     1  0.0000     0.8630 1.000 0.000
#> GSM701810     1  0.0000     0.8630 1.000 0.000
#> GSM701809     1  0.5629     0.8185 0.868 0.132
#> GSM701808     1  0.0000     0.8630 1.000 0.000
#> GSM701807     1  0.0000     0.8630 1.000 0.000
#> GSM701780     1  0.3584     0.8553 0.932 0.068
#> GSM701779     2  0.0000     0.8419 0.000 1.000
#> GSM701778     2  0.3879     0.8331 0.076 0.924
#> GSM701777     1  0.9963     0.1553 0.536 0.464
#> GSM701776     1  0.0000     0.8630 1.000 0.000
#> GSM701816     1  0.5294     0.8277 0.880 0.120
#> GSM701815     2  0.1633     0.8463 0.024 0.976
#> GSM701814     2  0.1414     0.8480 0.020 0.980
#> GSM701813     1  0.4022     0.8495 0.920 0.080
#> GSM701812     1  0.4690     0.8398 0.900 0.100
#> GSM701811     1  0.1633     0.8639 0.976 0.024
#> GSM701786     1  0.0000     0.8630 1.000 0.000
#> GSM701785     2  0.5629     0.7996 0.132 0.868
#> GSM701784     2  0.8763     0.5799 0.296 0.704
#> GSM701783     1  0.0000     0.8630 1.000 0.000
#> GSM701782     2  1.0000    -0.0671 0.500 0.500
#> GSM701781     1  0.8861     0.5969 0.696 0.304
#> GSM701822     2  0.0672     0.8459 0.008 0.992
#> GSM701821     2  0.9209     0.5016 0.336 0.664
#> GSM701820     1  0.4939     0.8351 0.892 0.108
#> GSM701819     1  0.2236     0.8614 0.964 0.036
#> GSM701818     1  0.0000     0.8630 1.000 0.000
#> GSM701817     1  0.3879     0.8522 0.924 0.076
#> GSM701790     1  0.0376     0.8634 0.996 0.004
#> GSM701789     1  0.0376     0.8634 0.996 0.004
#> GSM701788     1  0.0000     0.8630 1.000 0.000
#> GSM701787     2  0.9775     0.3045 0.412 0.588
#> GSM701824     1  0.0000     0.8630 1.000 0.000
#> GSM701823     2  0.1633     0.8477 0.024 0.976
#> GSM701791     2  0.0672     0.8458 0.008 0.992
#> GSM701793     1  0.0376     0.8634 0.996 0.004
#> GSM701792     1  0.6531     0.7806 0.832 0.168
#> GSM701825     1  0.0672     0.8623 0.992 0.008
#> GSM701827     2  0.0000     0.8419 0.000 1.000
#> GSM701826     2  0.7299     0.7352 0.204 0.796
#> GSM701797     1  0.9000     0.5705 0.684 0.316
#> GSM701796     1  0.3584     0.8521 0.932 0.068
#> GSM701795     2  0.1633     0.8477 0.024 0.976
#> GSM701794     2  0.0376     0.8442 0.004 0.996
#> GSM701831     2  0.5842     0.7963 0.140 0.860
#> GSM701830     2  0.0672     0.8460 0.008 0.992
#> GSM701829     1  0.9522     0.4447 0.628 0.372
#> GSM701828     2  0.5842     0.7970 0.140 0.860
#> GSM701798     2  0.1843     0.8476 0.028 0.972
#> GSM701802     2  0.9988     0.0436 0.480 0.520
#> GSM701801     1  0.7299     0.7435 0.796 0.204
#> GSM701800     1  0.8763     0.6107 0.704 0.296
#> GSM701799     2  0.0000     0.8419 0.000 1.000
#> GSM701832     2  0.7950     0.6844 0.240 0.760
#> GSM701835     1  0.9552     0.4344 0.624 0.376
#> GSM701834     2  0.5629     0.8022 0.132 0.868
#> GSM701833     2  0.0000     0.8419 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-hclust-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-hclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes, which can serve as candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
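A typical follow-up is to filter tb by FDR and map the signatures back to the input matrix (a sketch, assuming tb was created as above):

```r
# code only for demonstration
# keep signatures passing an FDR cutoff
sig = tb[tb$fdr < 0.05, ]
# rows of the input matrix corresponding to the signatures
sig_mat = mat[sig$which_row, , drop = FALSE]
```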

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n age(p) time(p) tissue(p) individual(p) k
#> SD:hclust 62 0.0235  0.0390    0.5990       0.00295 2
#> SD:hclust 26 0.3742  0.0353    0.1422       0.09924 3
#> SD:hclust 26 0.3742  0.1716    0.8324       0.05903 4
#> SD:hclust 43 0.1170  0.4419    0.0309       0.01850 5
#> SD:hclust 41 0.4026  0.0506    0.0334       0.02955 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


SD:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "kmeans"]
# you can also extract it by
# res = res_list["SD:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk SD-kmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk SD-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.885           0.934       0.966         0.4934 0.508   0.508
#> 3 3 0.577           0.596       0.793         0.3439 0.727   0.505
#> 4 4 0.615           0.721       0.816         0.1135 0.834   0.549
#> 5 5 0.632           0.607       0.798         0.0505 0.980   0.919
#> 6 6 0.655           0.664       0.773         0.0380 0.966   0.862

suggest_best_k() suggests the best k based on these statistics; the rules are explained in the cola vignette.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of belonging to a certain group; the final class label of an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.961 1.000 0.000
#> GSM701769     1  0.0000      0.961 1.000 0.000
#> GSM701768     1  0.0000      0.961 1.000 0.000
#> GSM701767     1  0.3584      0.918 0.932 0.068
#> GSM701766     2  0.7528      0.736 0.216 0.784
#> GSM701806     1  0.0000      0.961 1.000 0.000
#> GSM701805     1  0.0938      0.960 0.988 0.012
#> GSM701804     1  0.0938      0.960 0.988 0.012
#> GSM701803     1  0.0938      0.960 0.988 0.012
#> GSM701775     1  0.0000      0.961 1.000 0.000
#> GSM701774     1  0.0000      0.961 1.000 0.000
#> GSM701773     2  0.0938      0.969 0.012 0.988
#> GSM701772     1  0.3733      0.914 0.928 0.072
#> GSM701771     1  0.0000      0.961 1.000 0.000
#> GSM701810     1  0.0938      0.960 0.988 0.012
#> GSM701809     1  0.7883      0.725 0.764 0.236
#> GSM701808     1  0.0938      0.960 0.988 0.012
#> GSM701807     1  0.0938      0.960 0.988 0.012
#> GSM701780     1  0.0000      0.961 1.000 0.000
#> GSM701779     2  0.0938      0.969 0.012 0.988
#> GSM701778     2  0.0938      0.969 0.012 0.988
#> GSM701777     2  0.7602      0.729 0.220 0.780
#> GSM701776     1  0.0938      0.960 0.988 0.012
#> GSM701816     1  0.1414      0.958 0.980 0.020
#> GSM701815     2  0.0000      0.969 0.000 1.000
#> GSM701814     2  0.0000      0.969 0.000 1.000
#> GSM701813     1  0.1414      0.958 0.980 0.020
#> GSM701812     1  0.1184      0.960 0.984 0.016
#> GSM701811     1  0.0000      0.961 1.000 0.000
#> GSM701786     1  0.0000      0.961 1.000 0.000
#> GSM701785     2  0.0938      0.969 0.012 0.988
#> GSM701784     2  0.0938      0.969 0.012 0.988
#> GSM701783     1  0.0000      0.961 1.000 0.000
#> GSM701782     2  0.0938      0.969 0.012 0.988
#> GSM701781     1  0.9580      0.397 0.620 0.380
#> GSM701822     2  0.0000      0.969 0.000 1.000
#> GSM701821     2  0.0000      0.969 0.000 1.000
#> GSM701820     1  0.2423      0.947 0.960 0.040
#> GSM701819     1  0.0938      0.960 0.988 0.012
#> GSM701818     1  0.0938      0.960 0.988 0.012
#> GSM701817     1  0.0938      0.960 0.988 0.012
#> GSM701790     1  0.0000      0.961 1.000 0.000
#> GSM701789     1  0.0000      0.961 1.000 0.000
#> GSM701788     1  0.0000      0.961 1.000 0.000
#> GSM701787     2  0.1184      0.967 0.016 0.984
#> GSM701824     1  0.0938      0.960 0.988 0.012
#> GSM701823     2  0.0000      0.969 0.000 1.000
#> GSM701791     2  0.0938      0.969 0.012 0.988
#> GSM701793     1  0.0000      0.961 1.000 0.000
#> GSM701792     1  0.6148      0.830 0.848 0.152
#> GSM701825     1  0.0938      0.960 0.988 0.012
#> GSM701827     2  0.0000      0.969 0.000 1.000
#> GSM701826     2  0.0000      0.969 0.000 1.000
#> GSM701797     1  0.4690      0.887 0.900 0.100
#> GSM701796     1  0.0000      0.961 1.000 0.000
#> GSM701795     2  0.0938      0.969 0.012 0.988
#> GSM701794     2  0.0938      0.969 0.012 0.988
#> GSM701831     2  0.0000      0.969 0.000 1.000
#> GSM701830     2  0.0000      0.969 0.000 1.000
#> GSM701829     2  0.7674      0.704 0.224 0.776
#> GSM701828     2  0.0000      0.969 0.000 1.000
#> GSM701798     2  0.0938      0.969 0.012 0.988
#> GSM701802     2  0.0938      0.969 0.012 0.988
#> GSM701801     1  0.0672      0.958 0.992 0.008
#> GSM701800     1  0.3114      0.927 0.944 0.056
#> GSM701799     2  0.0938      0.969 0.012 0.988
#> GSM701832     2  0.0000      0.969 0.000 1.000
#> GSM701835     1  0.7602      0.735 0.780 0.220
#> GSM701834     2  0.0000      0.969 0.000 1.000
#> GSM701833     2  0.0000      0.969 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-kmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-kmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes, which can serve as candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n  age(p) time(p) tissue(p) individual(p) k
#> SD:kmeans 69 0.03314  0.0507  1.00e+00       0.00809 2
#> SD:kmeans 50 0.03276  0.0681  3.43e-02       0.08870 3
#> SD:kmeans 65 0.00439  0.0795  4.79e-06       0.00514 4
#> SD:kmeans 51 0.19098  0.0379  1.41e-03       0.01910 5
#> SD:kmeans 57 0.04225  0.0244  1.35e-04       0.06469 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


SD:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "skmeans"]
# you can also extract it by
# res = res_list["SD:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk SD-skmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1; if the two are too similar, k is not accepted as a better choice than k-1.

select_partition_number(res)

plot of chunk SD-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.879           0.911       0.963         0.5063 0.494   0.494
#> 3 3 0.463           0.589       0.786         0.3043 0.774   0.572
#> 4 4 0.420           0.508       0.682         0.1104 0.873   0.655
#> 5 5 0.437           0.386       0.622         0.0631 0.939   0.794
#> 6 6 0.471           0.342       0.567         0.0391 0.903   0.664

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2
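Whether a particular k passes these criteria can also be queried directly with is_best_k() and is_stable_k(), both listed in the method summary above; a minimal sketch against the res object:

```r
# code only for demonstration: query a specific k directly
is_best_k(res, k = 2)    # TRUE if k = 2 is the suggested best k
is_stable_k(res, k = 2)  # TRUE if k = 2 is considered a stable partition
```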

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of belonging to a certain group; the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
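For example, the class label can be recovered from the membership matrix by taking, for each sample, the group with the largest probability; a sketch using the res object above:

```r
# code only for demonstration: class label = group with the
# highest membership probability for each sample
m = get_membership(res, k = 2)
cl = apply(m, 1, which.max)
```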


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.955 1.000 0.000
#> GSM701769     1  0.0000      0.955 1.000 0.000
#> GSM701768     1  0.0000      0.955 1.000 0.000
#> GSM701767     1  0.9732      0.333 0.596 0.404
#> GSM701766     2  0.1184      0.953 0.016 0.984
#> GSM701806     1  0.0000      0.955 1.000 0.000
#> GSM701805     1  0.0000      0.955 1.000 0.000
#> GSM701804     1  0.0000      0.955 1.000 0.000
#> GSM701803     1  0.0000      0.955 1.000 0.000
#> GSM701775     1  0.0000      0.955 1.000 0.000
#> GSM701774     1  0.0000      0.955 1.000 0.000
#> GSM701773     2  0.0000      0.965 0.000 1.000
#> GSM701772     1  0.8713      0.595 0.708 0.292
#> GSM701771     1  0.0000      0.955 1.000 0.000
#> GSM701810     1  0.0000      0.955 1.000 0.000
#> GSM701809     2  0.8555      0.615 0.280 0.720
#> GSM701808     1  0.0000      0.955 1.000 0.000
#> GSM701807     1  0.0000      0.955 1.000 0.000
#> GSM701780     1  0.0000      0.955 1.000 0.000
#> GSM701779     2  0.0000      0.965 0.000 1.000
#> GSM701778     2  0.0000      0.965 0.000 1.000
#> GSM701777     2  0.0672      0.959 0.008 0.992
#> GSM701776     1  0.0000      0.955 1.000 0.000
#> GSM701816     1  0.3274      0.907 0.940 0.060
#> GSM701815     2  0.0000      0.965 0.000 1.000
#> GSM701814     2  0.0000      0.965 0.000 1.000
#> GSM701813     1  0.1414      0.941 0.980 0.020
#> GSM701812     1  0.0000      0.955 1.000 0.000
#> GSM701811     1  0.0000      0.955 1.000 0.000
#> GSM701786     1  0.0000      0.955 1.000 0.000
#> GSM701785     2  0.0000      0.965 0.000 1.000
#> GSM701784     2  0.0000      0.965 0.000 1.000
#> GSM701783     1  0.0000      0.955 1.000 0.000
#> GSM701782     2  0.0000      0.965 0.000 1.000
#> GSM701781     2  0.8081      0.674 0.248 0.752
#> GSM701822     2  0.0000      0.965 0.000 1.000
#> GSM701821     2  0.0000      0.965 0.000 1.000
#> GSM701820     1  0.4431      0.876 0.908 0.092
#> GSM701819     1  0.0000      0.955 1.000 0.000
#> GSM701818     1  0.0000      0.955 1.000 0.000
#> GSM701817     1  0.0000      0.955 1.000 0.000
#> GSM701790     1  0.0000      0.955 1.000 0.000
#> GSM701789     1  0.0000      0.955 1.000 0.000
#> GSM701788     1  0.0000      0.955 1.000 0.000
#> GSM701787     2  0.0000      0.965 0.000 1.000
#> GSM701824     1  0.0000      0.955 1.000 0.000
#> GSM701823     2  0.0000      0.965 0.000 1.000
#> GSM701791     2  0.0000      0.965 0.000 1.000
#> GSM701793     1  0.0000      0.955 1.000 0.000
#> GSM701792     2  0.8144      0.665 0.252 0.748
#> GSM701825     1  0.0000      0.955 1.000 0.000
#> GSM701827     2  0.0000      0.965 0.000 1.000
#> GSM701826     2  0.0000      0.965 0.000 1.000
#> GSM701797     1  0.9944      0.163 0.544 0.456
#> GSM701796     1  0.0000      0.955 1.000 0.000
#> GSM701795     2  0.0000      0.965 0.000 1.000
#> GSM701794     2  0.0000      0.965 0.000 1.000
#> GSM701831     2  0.0000      0.965 0.000 1.000
#> GSM701830     2  0.0000      0.965 0.000 1.000
#> GSM701829     2  0.2603      0.930 0.044 0.956
#> GSM701828     2  0.0000      0.965 0.000 1.000
#> GSM701798     2  0.0000      0.965 0.000 1.000
#> GSM701802     2  0.0000      0.965 0.000 1.000
#> GSM701801     1  0.0672      0.949 0.992 0.008
#> GSM701800     1  0.7745      0.704 0.772 0.228
#> GSM701799     2  0.0000      0.965 0.000 1.000
#> GSM701832     2  0.0000      0.965 0.000 1.000
#> GSM701835     2  0.7528      0.729 0.216 0.784
#> GSM701834     2  0.0000      0.965 0.000 1.000
#> GSM701833     2  0.0000      0.965 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-skmeans-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-skmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for the classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
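With tb in hand, the signature list can be filtered further, e.g. with a stricter FDR cutoff (demonstration only; tb is assumed to come from the get_signatures() call above):

```r
# code only for demonstration: keep signatures with FDR < 0.01
sig_rows = tb$which_row[tb$fdr < 0.01]
mat_sig = mat[sig_rows, , drop = FALSE]  # corresponding rows of the input matrix
```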

A UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-skmeans-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk SD-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n  age(p) time(p) tissue(p) individual(p) k
#> SD:skmeans 68 0.01545  0.0658  1.00e+00      0.016817 2
#> SD:skmeans 47 0.00746  0.1401  2.09e-04      0.001272 3
#> SD:skmeans 41 0.00692  0.1638  8.92e-05      0.000646 4
#> SD:skmeans 26 0.02420  0.0540  4.21e-02      0.023359 5
#> SD:skmeans 22 0.93966  0.0992  3.79e-01      0.272115 6
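The same kind of test can be run manually for a single annotation column; a sketch assuming the annotation table returned by get_anno() contains the tissue column shown in the table above:

```r
# code only for demonstration: chi-squared test of the 2-group classes
# against a known discrete factor
cl = get_classes(res, k = 2)$class
anno = get_anno(res)
chisq.test(table(cl, anno$tissue))
```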

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


SD:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "pam"]
# you can also extract it by
# res = res_list["SD:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of subgroups) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-pam-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1; if the two are too similar, k is not accepted as a better choice than k-1.

select_partition_number(res)

plot of chunk SD-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.691           0.857       0.936         0.5019 0.493   0.493
#> 3 3 0.566           0.779       0.877         0.2912 0.860   0.719
#> 4 4 0.575           0.541       0.776         0.1132 0.844   0.605
#> 5 5 0.562           0.556       0.769         0.0369 0.932   0.775
#> 6 6 0.552           0.490       0.763         0.0155 0.961   0.860

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of belonging to a certain group; the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000     0.9217 1.000 0.000
#> GSM701769     1  0.0000     0.9217 1.000 0.000
#> GSM701768     1  0.0000     0.9217 1.000 0.000
#> GSM701767     2  0.9795     0.2973 0.416 0.584
#> GSM701766     2  0.8499     0.6424 0.276 0.724
#> GSM701806     1  0.0000     0.9217 1.000 0.000
#> GSM701805     1  0.0000     0.9217 1.000 0.000
#> GSM701804     1  0.0376     0.9197 0.996 0.004
#> GSM701803     1  0.0672     0.9179 0.992 0.008
#> GSM701775     1  0.0000     0.9217 1.000 0.000
#> GSM701774     1  0.0672     0.9183 0.992 0.008
#> GSM701773     2  0.0000     0.9345 0.000 1.000
#> GSM701772     1  0.8386     0.6484 0.732 0.268
#> GSM701771     1  0.0000     0.9217 1.000 0.000
#> GSM701810     1  0.0000     0.9217 1.000 0.000
#> GSM701809     2  0.2948     0.9085 0.052 0.948
#> GSM701808     1  0.0000     0.9217 1.000 0.000
#> GSM701807     1  0.0000     0.9217 1.000 0.000
#> GSM701780     1  0.6343     0.8014 0.840 0.160
#> GSM701779     2  0.0000     0.9345 0.000 1.000
#> GSM701778     2  0.0376     0.9331 0.004 0.996
#> GSM701777     2  0.3431     0.9052 0.064 0.936
#> GSM701776     1  0.0000     0.9217 1.000 0.000
#> GSM701816     1  0.3733     0.8814 0.928 0.072
#> GSM701815     2  0.0000     0.9345 0.000 1.000
#> GSM701814     2  0.0000     0.9345 0.000 1.000
#> GSM701813     2  0.8144     0.6780 0.252 0.748
#> GSM701812     1  0.8555     0.6089 0.720 0.280
#> GSM701811     1  0.0000     0.9217 1.000 0.000
#> GSM701786     1  0.0000     0.9217 1.000 0.000
#> GSM701785     2  0.0000     0.9345 0.000 1.000
#> GSM701784     2  0.4939     0.8665 0.108 0.892
#> GSM701783     1  0.0000     0.9217 1.000 0.000
#> GSM701782     2  0.0000     0.9345 0.000 1.000
#> GSM701781     2  0.3274     0.9068 0.060 0.940
#> GSM701822     2  0.0000     0.9345 0.000 1.000
#> GSM701821     2  0.0000     0.9345 0.000 1.000
#> GSM701820     1  0.9710     0.3145 0.600 0.400
#> GSM701819     1  0.1633     0.9080 0.976 0.024
#> GSM701818     1  0.0000     0.9217 1.000 0.000
#> GSM701817     1  0.9998    -0.0308 0.508 0.492
#> GSM701790     1  0.5629     0.8245 0.868 0.132
#> GSM701789     1  0.0000     0.9217 1.000 0.000
#> GSM701788     1  0.0000     0.9217 1.000 0.000
#> GSM701787     2  0.9170     0.5173 0.332 0.668
#> GSM701824     1  0.0000     0.9217 1.000 0.000
#> GSM701823     2  0.0000     0.9345 0.000 1.000
#> GSM701791     2  0.0000     0.9345 0.000 1.000
#> GSM701793     1  0.0000     0.9217 1.000 0.000
#> GSM701792     1  0.6623     0.7821 0.828 0.172
#> GSM701825     1  0.0000     0.9217 1.000 0.000
#> GSM701827     2  0.0000     0.9345 0.000 1.000
#> GSM701826     2  0.3114     0.9074 0.056 0.944
#> GSM701797     2  0.3879     0.8967 0.076 0.924
#> GSM701796     1  0.4298     0.8646 0.912 0.088
#> GSM701795     2  0.0000     0.9345 0.000 1.000
#> GSM701794     2  0.0000     0.9345 0.000 1.000
#> GSM701831     2  0.0000     0.9345 0.000 1.000
#> GSM701830     2  0.0000     0.9345 0.000 1.000
#> GSM701829     2  0.5059     0.8667 0.112 0.888
#> GSM701828     2  0.0000     0.9345 0.000 1.000
#> GSM701798     2  0.0000     0.9345 0.000 1.000
#> GSM701802     2  0.0376     0.9331 0.004 0.996
#> GSM701801     1  0.7883     0.7024 0.764 0.236
#> GSM701800     2  0.7815     0.7173 0.232 0.768
#> GSM701799     2  0.0000     0.9345 0.000 1.000
#> GSM701832     2  0.0376     0.9332 0.004 0.996
#> GSM701835     2  0.4161     0.8897 0.084 0.916
#> GSM701834     2  0.0000     0.9345 0.000 1.000
#> GSM701833     2  0.0000     0.9345 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-pam-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-pam-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for the classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

A UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-pam-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk SD-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n age(p) time(p) tissue(p) individual(p) k
#> SD:pam 67 0.0317 0.00470  0.880694       0.02064 2
#> SD:pam 65 0.0115 0.01632  0.000484       0.00289 3
#> SD:pam 45 0.3020 0.00299  0.003044       0.05731 4
#> SD:pam 47 0.3684 0.00274  0.010274       0.18506 5
#> SD:pam 40 0.4878 0.02478  0.000390       0.34506 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


SD:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "mclust"]
# you can also extract it by
# res = res_list["SD:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of subgroups) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-mclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1; if the two are too similar, k is not accepted as a better choice than k-1.

select_partition_number(res)

plot of chunk SD-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.448           0.704       0.822         0.4837 0.499   0.499
#> 3 3 0.350           0.696       0.737         0.2574 1.000   1.000
#> 4 4 0.430           0.459       0.639         0.1580 0.771   0.547
#> 5 5 0.515           0.566       0.708         0.0733 0.855   0.550
#> 6 6 0.661           0.640       0.778         0.0607 0.946   0.759

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of belonging to a certain group; the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.804 1.000 0.000
#> GSM701769     1  0.0000      0.804 1.000 0.000
#> GSM701768     1  0.0000      0.804 1.000 0.000
#> GSM701767     1  0.0000      0.804 1.000 0.000
#> GSM701766     1  0.1633      0.797 0.976 0.024
#> GSM701806     1  0.4939      0.667 0.892 0.108
#> GSM701805     2  0.9754      0.715 0.408 0.592
#> GSM701804     2  0.9754      0.715 0.408 0.592
#> GSM701803     2  0.9754      0.715 0.408 0.592
#> GSM701775     1  0.0000      0.804 1.000 0.000
#> GSM701774     1  0.0938      0.795 0.988 0.012
#> GSM701773     1  0.9754      0.537 0.592 0.408
#> GSM701772     1  0.0376      0.803 0.996 0.004
#> GSM701771     1  0.0376      0.801 0.996 0.004
#> GSM701810     2  0.9754      0.715 0.408 0.592
#> GSM701809     2  0.9754      0.715 0.408 0.592
#> GSM701808     2  0.9754      0.715 0.408 0.592
#> GSM701807     2  0.9754      0.715 0.408 0.592
#> GSM701780     1  0.0000      0.804 1.000 0.000
#> GSM701779     1  0.9815      0.526 0.580 0.420
#> GSM701778     1  0.9754      0.537 0.592 0.408
#> GSM701777     1  0.1843      0.795 0.972 0.028
#> GSM701776     2  0.9754      0.715 0.408 0.592
#> GSM701816     2  0.9754      0.715 0.408 0.592
#> GSM701815     2  0.1633      0.660 0.024 0.976
#> GSM701814     2  0.0000      0.653 0.000 1.000
#> GSM701813     2  0.9754      0.715 0.408 0.592
#> GSM701812     2  0.9754      0.715 0.408 0.592
#> GSM701811     1  0.0376      0.801 0.996 0.004
#> GSM701786     1  0.0000      0.804 1.000 0.000
#> GSM701785     1  0.9710      0.543 0.600 0.400
#> GSM701784     1  0.9044      0.602 0.680 0.320
#> GSM701783     1  0.0000      0.804 1.000 0.000
#> GSM701782     1  0.9129      0.596 0.672 0.328
#> GSM701781     1  0.6148      0.581 0.848 0.152
#> GSM701822     2  0.0000      0.653 0.000 1.000
#> GSM701821     2  0.0000      0.653 0.000 1.000
#> GSM701820     2  0.9754      0.715 0.408 0.592
#> GSM701819     2  0.9754      0.715 0.408 0.592
#> GSM701818     2  0.9754      0.715 0.408 0.592
#> GSM701817     2  0.9754      0.715 0.408 0.592
#> GSM701790     1  0.0000      0.804 1.000 0.000
#> GSM701789     1  0.0000      0.804 1.000 0.000
#> GSM701788     1  0.0000      0.804 1.000 0.000
#> GSM701787     1  0.4690      0.755 0.900 0.100
#> GSM701824     2  0.9815      0.698 0.420 0.580
#> GSM701823     2  0.2948      0.663 0.052 0.948
#> GSM701791     1  0.9815      0.527 0.580 0.420
#> GSM701793     1  0.0000      0.804 1.000 0.000
#> GSM701792     1  0.0376      0.803 0.996 0.004
#> GSM701825     2  0.9754      0.715 0.408 0.592
#> GSM701827     2  0.0000      0.653 0.000 1.000
#> GSM701826     2  0.0000      0.653 0.000 1.000
#> GSM701797     1  0.0376      0.803 0.996 0.004
#> GSM701796     1  0.0000      0.804 1.000 0.000
#> GSM701795     1  0.9754      0.537 0.592 0.408
#> GSM701794     1  0.9754      0.537 0.592 0.408
#> GSM701831     2  0.0938      0.650 0.012 0.988
#> GSM701830     2  0.0000      0.653 0.000 1.000
#> GSM701829     2  0.9580      0.709 0.380 0.620
#> GSM701828     2  0.1843      0.660 0.028 0.972
#> GSM701798     1  0.9754      0.537 0.592 0.408
#> GSM701802     1  0.6247      0.719 0.844 0.156
#> GSM701801     1  0.0000      0.804 1.000 0.000
#> GSM701800     1  0.0938      0.794 0.988 0.012
#> GSM701799     1  0.9754      0.537 0.592 0.408
#> GSM701832     2  0.1633      0.646 0.024 0.976
#> GSM701835     1  0.3733      0.723 0.928 0.072
#> GSM701834     2  0.0000      0.653 0.000 1.000
#> GSM701833     2  0.0000      0.653 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-mclust-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-mclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for the classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

A UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-mclust-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk SD-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n age(p) time(p) tissue(p) individual(p) k
#> SD:mclust 70 0.8670  0.7250  7.21e-13       0.77488 2
#> SD:mclust 69 0.7954  0.7317  1.16e-12       0.71949 3
#> SD:mclust 32 0.0166  0.1492  1.24e-06       0.03293 4
#> SD:mclust 55 0.0146  0.1705  4.96e-09       0.00163 5
#> SD:mclust 60 0.0279  0.0647  1.75e-09       0.08518 6

If matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


SD:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "NMF"]
# you can also extract it by
# res = res_list["SD:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'SD' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of subgroups) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-NMF-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1; if the two are too similar, k is not accepted as a better choice than k-1.

select_partition_number(res)

plot of chunk SD-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.597           0.812       0.920         0.4980 0.496   0.496
#> 3 3 0.533           0.736       0.859         0.3341 0.735   0.516
#> 4 4 0.488           0.505       0.726         0.1058 0.868   0.641
#> 5 5 0.473           0.434       0.660         0.0538 0.919   0.729
#> 6 6 0.494           0.369       0.600         0.0329 0.976   0.906

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2
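The related helpers is_best_k() and is_stable_k(), both listed among the applicable methods above, return logicals and are convenient in scripts. Whether a k counts as "stable" depends on cola's internal 1-PAC cutoff, so the comments below are assumptions:

# code only for demonstration
is_best_k(res, k = 2)    # is k = 2 the suggested best k?
is_stable_k(res, k = 2)  # is the k = 2 partition stable (high 1-PAC)?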

The following table shows the partitions. The membership matrix (columns with names p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is determined as the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
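As a sanity check, the entropy column can be reproduced from the membership probabilities as Shannon entropy; the normalization by log2(k) is an assumption, but for k = 2 it equals 1 either way. For GSM701768 with p = (0.992, 0.008) this reproduces the 0.0672 shown in the table:

p = c(0.992, 0.008)                            # membership row for GSM701768
round(-sum(p * log2(p)) / log2(length(p)), 4)
#> [1] 0.0672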


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000     0.9181 1.000 0.000
#> GSM701769     1  0.0000     0.9181 1.000 0.000
#> GSM701768     1  0.0672     0.9151 0.992 0.008
#> GSM701767     1  0.9954     0.0468 0.540 0.460
#> GSM701766     2  0.6623     0.7697 0.172 0.828
#> GSM701806     1  0.0000     0.9181 1.000 0.000
#> GSM701805     1  0.0000     0.9181 1.000 0.000
#> GSM701804     2  0.7299     0.7287 0.204 0.796
#> GSM701803     1  0.9491     0.4138 0.632 0.368
#> GSM701775     1  0.0000     0.9181 1.000 0.000
#> GSM701774     1  0.1414     0.9091 0.980 0.020
#> GSM701773     2  0.0000     0.8966 0.000 1.000
#> GSM701772     1  0.7056     0.7319 0.808 0.192
#> GSM701771     1  0.0000     0.9181 1.000 0.000
#> GSM701810     1  0.0000     0.9181 1.000 0.000
#> GSM701809     2  0.8207     0.6537 0.256 0.744
#> GSM701808     1  0.5737     0.8161 0.864 0.136
#> GSM701807     1  0.3584     0.8772 0.932 0.068
#> GSM701780     1  0.0000     0.9181 1.000 0.000
#> GSM701779     2  0.0000     0.8966 0.000 1.000
#> GSM701778     2  0.0000     0.8966 0.000 1.000
#> GSM701777     2  0.3879     0.8570 0.076 0.924
#> GSM701776     1  0.0000     0.9181 1.000 0.000
#> GSM701816     2  0.9896     0.2337 0.440 0.560
#> GSM701815     2  0.0000     0.8966 0.000 1.000
#> GSM701814     2  0.0000     0.8966 0.000 1.000
#> GSM701813     2  0.9922     0.1994 0.448 0.552
#> GSM701812     1  0.7219     0.7383 0.800 0.200
#> GSM701811     1  0.0000     0.9181 1.000 0.000
#> GSM701786     1  0.0000     0.9181 1.000 0.000
#> GSM701785     2  0.2043     0.8840 0.032 0.968
#> GSM701784     2  0.2043     0.8839 0.032 0.968
#> GSM701783     1  0.0000     0.9181 1.000 0.000
#> GSM701782     2  0.0000     0.8966 0.000 1.000
#> GSM701781     2  0.0938     0.8924 0.012 0.988
#> GSM701822     2  0.0000     0.8966 0.000 1.000
#> GSM701821     2  0.0000     0.8966 0.000 1.000
#> GSM701820     2  0.9460     0.4406 0.364 0.636
#> GSM701819     1  0.6973     0.7545 0.812 0.188
#> GSM701818     1  0.5294     0.8327 0.880 0.120
#> GSM701817     2  0.9922     0.2018 0.448 0.552
#> GSM701790     1  0.0000     0.9181 1.000 0.000
#> GSM701789     1  0.0000     0.9181 1.000 0.000
#> GSM701788     1  0.0000     0.9181 1.000 0.000
#> GSM701787     2  0.7528     0.7147 0.216 0.784
#> GSM701824     1  0.2043     0.9024 0.968 0.032
#> GSM701823     2  0.0000     0.8966 0.000 1.000
#> GSM701791     2  0.0000     0.8966 0.000 1.000
#> GSM701793     1  0.0000     0.9181 1.000 0.000
#> GSM701792     1  0.3114     0.8847 0.944 0.056
#> GSM701825     2  0.9323     0.4769 0.348 0.652
#> GSM701827     2  0.0000     0.8966 0.000 1.000
#> GSM701826     2  0.0000     0.8966 0.000 1.000
#> GSM701797     1  0.9129     0.4690 0.672 0.328
#> GSM701796     1  0.0000     0.9181 1.000 0.000
#> GSM701795     2  0.0000     0.8966 0.000 1.000
#> GSM701794     2  0.0000     0.8966 0.000 1.000
#> GSM701831     2  0.0000     0.8966 0.000 1.000
#> GSM701830     2  0.0000     0.8966 0.000 1.000
#> GSM701829     2  0.2603     0.8762 0.044 0.956
#> GSM701828     2  0.0376     0.8952 0.004 0.996
#> GSM701798     2  0.0000     0.8966 0.000 1.000
#> GSM701802     2  0.5059     0.8305 0.112 0.888
#> GSM701801     1  0.0000     0.9181 1.000 0.000
#> GSM701800     1  0.0672     0.9151 0.992 0.008
#> GSM701799     2  0.0000     0.8966 0.000 1.000
#> GSM701832     2  0.0000     0.8966 0.000 1.000
#> GSM701835     2  0.7299     0.7403 0.204 0.796
#> GSM701834     2  0.0000     0.8966 0.000 1.000
#> GSM701833     2  0.0000     0.8966 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-NMF-consensus-heatmap-1

Heatmap of the membership of samples in all individual partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-NMF-membership-heatmap-1

Once the classes for columns are determined, we can look for signature rows that are significantly different between classes; these can serve as candidate markers for the classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
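Because tb is an ordinary data frame, these documented columns can be used for further filtering, for example keeping signatures at a stricter FDR cutoff (the 0.01 threshold here is arbitrary):

# code only for demonstration; the 0.01 cutoff is arbitrary
tb_strict = tb[tb$fdr < 0.01, ]     # stricter FDR cutoff
rownames(mat)[tb_strict$which_row]  # map row indices back to the input matrix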

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-NMF-dimension-reduction-1
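Other projection methods may be available for dimension_reduction(); the method name below is an assumption based on common choices, so check the function's documentation:

# code only for demonstration; method name is an assumption
dimension_reduction(res, k = 2, method = "PCA")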

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-NMF-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n  age(p)  time(p) tissue(p) individual(p) k
#> SD:NMF 62 0.10821 0.053378  2.74e-01        0.0416 2
#> SD:NMF 62 0.00745 0.053377  6.97e-07        0.0166 3
#> SD:NMF 39 0.59482 0.000192  2.63e-02        0.1367 4
#> SD:NMF 35 0.36948 0.001026  3.11e-02        0.1266 5
#> SD:NMF 24 1.00000 0.001899  2.14e-01        0.3795 6

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "hclust"]
# you can also extract it by
# res = res_list["CV:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of subgroups) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-hclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics (1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index) for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk CV-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.352           0.715       0.863         0.4755 0.508   0.508
#> 3 3 0.288           0.521       0.775         0.2080 0.983   0.967
#> 4 4 0.290           0.478       0.726         0.0978 0.851   0.724
#> 5 5 0.342           0.514       0.676         0.0830 0.861   0.692
#> 6 6 0.387           0.417       0.631         0.0539 0.933   0.801

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns with names p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is determined as the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0376     0.8248 0.996 0.004
#> GSM701769     1  0.2778     0.8288 0.952 0.048
#> GSM701768     1  0.4298     0.8201 0.912 0.088
#> GSM701767     2  0.9977     0.0293 0.472 0.528
#> GSM701766     1  0.9988     0.1112 0.520 0.480
#> GSM701806     1  0.0000     0.8228 1.000 0.000
#> GSM701805     1  0.1414     0.8293 0.980 0.020
#> GSM701804     1  0.9866     0.3105 0.568 0.432
#> GSM701803     1  0.5408     0.7992 0.876 0.124
#> GSM701775     1  0.4161     0.8173 0.916 0.084
#> GSM701774     1  0.4815     0.8117 0.896 0.104
#> GSM701773     2  0.0938     0.8498 0.012 0.988
#> GSM701772     1  0.9944     0.2036 0.544 0.456
#> GSM701771     1  0.0376     0.8248 0.996 0.004
#> GSM701810     1  0.1184     0.8289 0.984 0.016
#> GSM701809     1  0.5519     0.7998 0.872 0.128
#> GSM701808     1  0.1184     0.8286 0.984 0.016
#> GSM701807     1  0.0000     0.8228 1.000 0.000
#> GSM701780     1  0.6712     0.7599 0.824 0.176
#> GSM701779     2  0.0000     0.8442 0.000 1.000
#> GSM701778     2  0.1633     0.8527 0.024 0.976
#> GSM701777     2  0.8861     0.5921 0.304 0.696
#> GSM701776     1  0.0000     0.8228 1.000 0.000
#> GSM701816     1  0.9686     0.4055 0.604 0.396
#> GSM701815     2  0.4161     0.8358 0.084 0.916
#> GSM701814     2  0.1414     0.8518 0.020 0.980
#> GSM701813     1  0.9460     0.4909 0.636 0.364
#> GSM701812     1  0.6887     0.7566 0.816 0.184
#> GSM701811     1  0.2948     0.8291 0.948 0.052
#> GSM701786     1  0.0000     0.8228 1.000 0.000
#> GSM701785     2  0.5059     0.8210 0.112 0.888
#> GSM701784     2  0.8267     0.6756 0.260 0.740
#> GSM701783     1  0.0672     0.8265 0.992 0.008
#> GSM701782     2  0.8861     0.5896 0.304 0.696
#> GSM701781     2  0.9552     0.4044 0.376 0.624
#> GSM701822     2  0.1414     0.8517 0.020 0.980
#> GSM701821     2  0.6712     0.7772 0.176 0.824
#> GSM701820     1  0.4431     0.8177 0.908 0.092
#> GSM701819     1  0.2423     0.8300 0.960 0.040
#> GSM701818     1  0.0938     0.8279 0.988 0.012
#> GSM701817     1  0.7453     0.7222 0.788 0.212
#> GSM701790     1  0.0938     0.8273 0.988 0.012
#> GSM701789     1  0.2043     0.8309 0.968 0.032
#> GSM701788     1  0.0000     0.8228 1.000 0.000
#> GSM701787     2  0.8499     0.6464 0.276 0.724
#> GSM701824     1  0.1633     0.8298 0.976 0.024
#> GSM701823     2  0.2948     0.8465 0.052 0.948
#> GSM701791     2  0.0376     0.8464 0.004 0.996
#> GSM701793     1  0.0376     0.8247 0.996 0.004
#> GSM701792     1  0.7745     0.7057 0.772 0.228
#> GSM701825     1  0.3733     0.8150 0.928 0.072
#> GSM701827     2  0.0000     0.8442 0.000 1.000
#> GSM701826     2  0.7950     0.7040 0.240 0.760
#> GSM701797     1  0.9608     0.4333 0.616 0.384
#> GSM701796     1  0.2236     0.8299 0.964 0.036
#> GSM701795     2  0.1843     0.8526 0.028 0.972
#> GSM701794     2  0.0000     0.8442 0.000 1.000
#> GSM701831     2  0.2236     0.8522 0.036 0.964
#> GSM701830     2  0.1184     0.8502 0.016 0.984
#> GSM701829     1  0.9933     0.2190 0.548 0.452
#> GSM701828     2  0.7602     0.7322 0.220 0.780
#> GSM701798     2  0.2236     0.8519 0.036 0.964
#> GSM701802     2  0.8081     0.6959 0.248 0.752
#> GSM701801     1  0.9087     0.5606 0.676 0.324
#> GSM701800     1  0.9970     0.1625 0.532 0.468
#> GSM701799     2  0.0000     0.8442 0.000 1.000
#> GSM701832     2  0.6801     0.7747 0.180 0.820
#> GSM701835     1  0.9970     0.1615 0.532 0.468
#> GSM701834     2  0.2236     0.8522 0.036 0.964
#> GSM701833     2  0.0000     0.8442 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-hclust-consensus-heatmap-1

Heatmap of the membership of samples in all individual partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-hclust-membership-heatmap-1

Once the classes for columns are determined, we can look for signature rows that are significantly different between classes; these can serve as candidate markers for the classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-hclust-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n age(p) time(p) tissue(p) individual(p) k
#> CV:hclust 59 0.0318 0.01766     1.000       0.01550 2
#> CV:hclust 46 0.0357 0.00636     0.546       0.03278 3
#> CV:hclust 33 0.9175 0.00944     0.366       0.03446 4
#> CV:hclust 46 0.5171 0.04424     0.458       0.02388 5
#> CV:hclust 29 0.2603 0.01696     0.772       0.00511 6

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:kmeans*

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "kmeans"]
# you can also extract it by
# res = res_list["CV:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of subgroups) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-kmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics (1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index) for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk CV-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.940           0.940       0.974         0.4928 0.508   0.508
#> 3 3 0.527           0.720       0.852         0.3464 0.738   0.521
#> 4 4 0.552           0.570       0.754         0.1047 0.890   0.694
#> 5 5 0.563           0.454       0.708         0.0553 0.925   0.744
#> 6 6 0.588           0.491       0.712         0.0370 0.886   0.578

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns with names p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is determined as the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.972 1.000 0.000
#> GSM701769     1  0.0000      0.972 1.000 0.000
#> GSM701768     1  0.0000      0.972 1.000 0.000
#> GSM701767     1  0.5629      0.848 0.868 0.132
#> GSM701766     2  0.5946      0.828 0.144 0.856
#> GSM701806     1  0.0000      0.972 1.000 0.000
#> GSM701805     1  0.0000      0.972 1.000 0.000
#> GSM701804     1  0.0000      0.972 1.000 0.000
#> GSM701803     1  0.0000      0.972 1.000 0.000
#> GSM701775     1  0.0000      0.972 1.000 0.000
#> GSM701774     1  0.0000      0.972 1.000 0.000
#> GSM701773     2  0.0000      0.973 0.000 1.000
#> GSM701772     1  0.1843      0.956 0.972 0.028
#> GSM701771     1  0.0000      0.972 1.000 0.000
#> GSM701810     1  0.0000      0.972 1.000 0.000
#> GSM701809     1  0.0672      0.968 0.992 0.008
#> GSM701808     1  0.0000      0.972 1.000 0.000
#> GSM701807     1  0.0000      0.972 1.000 0.000
#> GSM701780     1  0.0000      0.972 1.000 0.000
#> GSM701779     2  0.0000      0.973 0.000 1.000
#> GSM701778     2  0.0000      0.973 0.000 1.000
#> GSM701777     2  0.5629      0.843 0.132 0.868
#> GSM701776     1  0.0000      0.972 1.000 0.000
#> GSM701816     1  0.2948      0.936 0.948 0.052
#> GSM701815     2  0.0000      0.973 0.000 1.000
#> GSM701814     2  0.0000      0.973 0.000 1.000
#> GSM701813     1  0.1184      0.963 0.984 0.016
#> GSM701812     1  0.2043      0.953 0.968 0.032
#> GSM701811     1  0.0000      0.972 1.000 0.000
#> GSM701786     1  0.0000      0.972 1.000 0.000
#> GSM701785     2  0.0376      0.970 0.004 0.996
#> GSM701784     2  0.1633      0.954 0.024 0.976
#> GSM701783     1  0.0000      0.972 1.000 0.000
#> GSM701782     2  0.0000      0.973 0.000 1.000
#> GSM701781     1  0.9795      0.293 0.584 0.416
#> GSM701822     2  0.0000      0.973 0.000 1.000
#> GSM701821     2  0.0000      0.973 0.000 1.000
#> GSM701820     1  0.0000      0.972 1.000 0.000
#> GSM701819     1  0.0000      0.972 1.000 0.000
#> GSM701818     1  0.0000      0.972 1.000 0.000
#> GSM701817     1  0.0000      0.972 1.000 0.000
#> GSM701790     1  0.0000      0.972 1.000 0.000
#> GSM701789     1  0.0000      0.972 1.000 0.000
#> GSM701788     1  0.0000      0.972 1.000 0.000
#> GSM701787     2  0.0672      0.968 0.008 0.992
#> GSM701824     1  0.0000      0.972 1.000 0.000
#> GSM701823     2  0.0000      0.973 0.000 1.000
#> GSM701791     2  0.0000      0.973 0.000 1.000
#> GSM701793     1  0.0000      0.972 1.000 0.000
#> GSM701792     1  0.1633      0.958 0.976 0.024
#> GSM701825     1  0.0000      0.972 1.000 0.000
#> GSM701827     2  0.0000      0.973 0.000 1.000
#> GSM701826     2  0.0000      0.973 0.000 1.000
#> GSM701797     1  0.3431      0.925 0.936 0.064
#> GSM701796     1  0.0000      0.972 1.000 0.000
#> GSM701795     2  0.0000      0.973 0.000 1.000
#> GSM701794     2  0.0000      0.973 0.000 1.000
#> GSM701831     2  0.0000      0.973 0.000 1.000
#> GSM701830     2  0.0000      0.973 0.000 1.000
#> GSM701829     2  0.9754      0.300 0.408 0.592
#> GSM701828     2  0.0938      0.965 0.012 0.988
#> GSM701798     2  0.0000      0.973 0.000 1.000
#> GSM701802     2  0.0000      0.973 0.000 1.000
#> GSM701801     1  0.1633      0.959 0.976 0.024
#> GSM701800     1  0.1184      0.964 0.984 0.016
#> GSM701799     2  0.0000      0.973 0.000 1.000
#> GSM701832     2  0.0000      0.973 0.000 1.000
#> GSM701835     1  0.8443      0.629 0.728 0.272
#> GSM701834     2  0.0000      0.973 0.000 1.000
#> GSM701833     2  0.0000      0.973 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-kmeans-consensus-heatmap-1

Heatmap of the membership of samples in all individual partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-kmeans-membership-heatmap-1

Once the classes for columns are determined, we can look for signature rows that are significantly different between classes; these can serve as candidate markers for the classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n age(p) time(p) tissue(p) individual(p) k
#> CV:kmeans 68 0.0468  0.0659    1.0000       0.00974 2
#> CV:kmeans 62 0.0605  0.0247    0.5465       0.04064 3
#> CV:kmeans 46 0.2457  0.0260    0.0429       0.09289 4
#> CV:kmeans 33 0.3885  0.1765    0.2981       0.01281 5
#> CV:kmeans 34 0.1492  0.0748    0.3030       0.00271 6

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "skmeans"]
# you can also extract it by
# res = res_list["CV:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of subgroups) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk CV-skmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics (1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index) for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk CV-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.691           0.864       0.941         0.5059 0.494   0.494
#> 3 3 0.392           0.591       0.751         0.2985 0.853   0.710
#> 4 4 0.377           0.417       0.624         0.1179 0.931   0.821
#> 5 5 0.392           0.294       0.558         0.0659 0.923   0.773
#> 6 6 0.413           0.252       0.508         0.0413 0.932   0.770

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function using the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.932 1.000 0.000
#> GSM701769     1  0.0000      0.932 1.000 0.000
#> GSM701768     1  0.0000      0.932 1.000 0.000
#> GSM701767     2  0.9933      0.171 0.452 0.548
#> GSM701766     2  0.4161      0.875 0.084 0.916
#> GSM701806     1  0.0000      0.932 1.000 0.000
#> GSM701805     1  0.0000      0.932 1.000 0.000
#> GSM701804     1  0.2236      0.912 0.964 0.036
#> GSM701803     1  0.0376      0.931 0.996 0.004
#> GSM701775     1  0.0000      0.932 1.000 0.000
#> GSM701774     1  0.0376      0.931 0.996 0.004
#> GSM701773     2  0.0000      0.938 0.000 1.000
#> GSM701772     1  0.8555      0.636 0.720 0.280
#> GSM701771     1  0.0000      0.932 1.000 0.000
#> GSM701810     1  0.0000      0.932 1.000 0.000
#> GSM701809     1  0.8267      0.666 0.740 0.260
#> GSM701808     1  0.0000      0.932 1.000 0.000
#> GSM701807     1  0.0000      0.932 1.000 0.000
#> GSM701780     1  0.0376      0.931 0.996 0.004
#> GSM701779     2  0.0000      0.938 0.000 1.000
#> GSM701778     2  0.0000      0.938 0.000 1.000
#> GSM701777     2  0.4690      0.859 0.100 0.900
#> GSM701776     1  0.0000      0.932 1.000 0.000
#> GSM701816     1  0.8016      0.700 0.756 0.244
#> GSM701815     2  0.0000      0.938 0.000 1.000
#> GSM701814     2  0.0000      0.938 0.000 1.000
#> GSM701813     1  0.8861      0.581 0.696 0.304
#> GSM701812     1  0.6148      0.816 0.848 0.152
#> GSM701811     1  0.0000      0.932 1.000 0.000
#> GSM701786     1  0.0000      0.932 1.000 0.000
#> GSM701785     2  0.0000      0.938 0.000 1.000
#> GSM701784     2  0.0000      0.938 0.000 1.000
#> GSM701783     1  0.0000      0.932 1.000 0.000
#> GSM701782     2  0.0000      0.938 0.000 1.000
#> GSM701781     2  0.8386      0.632 0.268 0.732
#> GSM701822     2  0.0000      0.938 0.000 1.000
#> GSM701821     2  0.0000      0.938 0.000 1.000
#> GSM701820     1  0.4939      0.856 0.892 0.108
#> GSM701819     1  0.0000      0.932 1.000 0.000
#> GSM701818     1  0.0000      0.932 1.000 0.000
#> GSM701817     1  0.1414      0.922 0.980 0.020
#> GSM701790     1  0.0000      0.932 1.000 0.000
#> GSM701789     1  0.0000      0.932 1.000 0.000
#> GSM701788     1  0.0000      0.932 1.000 0.000
#> GSM701787     2  0.1843      0.919 0.028 0.972
#> GSM701824     1  0.0000      0.932 1.000 0.000
#> GSM701823     2  0.0000      0.938 0.000 1.000
#> GSM701791     2  0.0000      0.938 0.000 1.000
#> GSM701793     1  0.0000      0.932 1.000 0.000
#> GSM701792     1  0.9129      0.540 0.672 0.328
#> GSM701825     1  0.0000      0.932 1.000 0.000
#> GSM701827     2  0.0000      0.938 0.000 1.000
#> GSM701826     2  0.0000      0.938 0.000 1.000
#> GSM701797     2  0.9909      0.187 0.444 0.556
#> GSM701796     1  0.0000      0.932 1.000 0.000
#> GSM701795     2  0.0000      0.938 0.000 1.000
#> GSM701794     2  0.0000      0.938 0.000 1.000
#> GSM701831     2  0.0000      0.938 0.000 1.000
#> GSM701830     2  0.0000      0.938 0.000 1.000
#> GSM701829     2  0.6623      0.777 0.172 0.828
#> GSM701828     2  0.0376      0.935 0.004 0.996
#> GSM701798     2  0.0000      0.938 0.000 1.000
#> GSM701802     2  0.0000      0.938 0.000 1.000
#> GSM701801     1  0.6048      0.818 0.852 0.148
#> GSM701800     1  0.9580      0.406 0.620 0.380
#> GSM701799     2  0.0000      0.938 0.000 1.000
#> GSM701832     2  0.0000      0.938 0.000 1.000
#> GSM701835     2  0.8909      0.559 0.308 0.692
#> GSM701834     2  0.0000      0.938 0.000 1.000
#> GSM701833     2  0.0000      0.938 0.000 1.000
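
The entropy column above can be reproduced from the membership probabilities. Assuming it is the Shannon entropy normalized by log2(k) (a sketch, not necessarily cola's exact code), the row for GSM701767 with p = (0.452, 0.548) gives:

```r
# Normalized Shannon entropy of a membership-probability vector
# (assumption: this is how the entropy column is defined).
membership_entropy = function(p, k = length(p)) {
    p = p[p > 0]                    # treat 0 * log2(0) as 0
    -sum(p * log2(p)) / log2(k)
}
round(membership_entropy(c(0.452, 0.548)), 4)  # 0.9933, as in the table
round(membership_entropy(c(1, 0)), 4)          # 0, a fully certain assignment
```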

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-skmeans-consensus-heatmap-1
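
Conceptually, a consensus matrix entry is the fraction of resampled partitions in which two samples end up in the same group. A minimal base-R sketch with hypothetical partitions (not cola's implementation):

```r
# Consensus matrix: for each pair of samples, the fraction of partitions
# in which they share a group.
consensus_matrix = function(partitions) {
    mats = lapply(partitions, function(cl) outer(cl, cl, "=="))
    Reduce(`+`, mats) / length(partitions)
}
cm = consensus_matrix(list(c(1, 1, 2), c(1, 2, 2), c(1, 1, 2)))
cm[1, 2]  # samples 1 and 2 co-cluster in 2 of 3 partitions: 2/3
```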

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-skmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-skmeans-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
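
For example, to pull out the rows passing a stricter FDR cutoff (a self-contained sketch rebuilt from the example rows shown above; the 0.01 cutoff is arbitrary):

```r
# Rebuild the example rows shown above, then filter by FDR.
tb = data.frame(
    which_row = c(38, 40, 55, 59, 60, 98),
    fdr = c(0.042760348, 0.018707592, 0.019134737,
            0.006059896, 0.018055526, 0.009384629)
)
sig_rows = tb$which_row[tb$fdr < 0.01]
sig_rows  # rows 59 and 98; use mat[sig_rows, ] to get the signature submatrix
```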

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-skmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n age(p) time(p) tissue(p) individual(p) k
#> CV:skmeans 67 0.0124 0.01686     1.000        0.0119 2
#> CV:skmeans 49 0.0686 0.00576     0.442        0.2858 3
#> CV:skmeans 30 0.0646 0.17840     0.680        0.0768 4
#> CV:skmeans 19     NA      NA        NA            NA 5
#> CV:skmeans 16     NA      NA        NA            NA 6
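
The two tests behind this table can be sketched in base R (hypothetical annotation vectors; cola applies the tests per annotation column):

```r
# One-way ANOVA for a numeric annotation, chi-squared test for a discrete one.
class  = factor(rep(1:2, each = 3))               # predicted subgroups
age    = c(30, 35, 32, 60, 58, 63)                # numeric annotation
tissue = factor(c("A", "A", "B", "B", "B", "B"))  # discrete annotation

anova_p = summary(aov(age ~ class))[[1]][["Pr(>F)"]][1]
chisq_p = suppressWarnings(chisq.test(table(class, tissue))$p.value)
c(anova_p, chisq_p)  # age differs strongly between subgroups; tissue does not
```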

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "pam"]
# you can also extract it by
# res = res_list["CV:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots generated from res for every k (number of partitions) into a single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk CV-pam-collect-plots

All the plots in the panels can be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups; if the two are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk CV-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.418           0.787       0.895         0.4844 0.499   0.499
#> 3 3 0.418           0.687       0.819         0.3754 0.738   0.517
#> 4 4 0.497           0.608       0.785         0.1058 0.885   0.671
#> 5 5 0.516           0.585       0.775         0.0204 0.982   0.931
#> 6 6 0.520           0.580       0.752         0.0149 1.000   1.000
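
The 1-PAC column is one minus the proportion of ambiguous clustering. Assuming the common definition, the fraction of consensus values falling in an ambiguous interval such as (0.1, 0.9) (the boundaries here are an assumption, not read from cola), a sketch:

```r
# PAC: proportion of consensus values that are ambiguous, i.e. neither
# clearly 0 nor clearly 1 (interval boundaries are an assumption).
pac = function(consensus_values, lower = 0.1, upper = 0.9) {
    mean(consensus_values > lower & consensus_values < upper)
}
v = c(0, 0, 0.2, 0.5, 0.95, 1)
1 - pac(v)  # 2 of 6 values are ambiguous, so 1-PAC = 2/3
```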

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function using the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.861 1.000 0.000
#> GSM701769     1  0.2778      0.858 0.952 0.048
#> GSM701768     1  0.1184      0.864 0.984 0.016
#> GSM701767     2  0.6343      0.817 0.160 0.840
#> GSM701766     2  0.6623      0.807 0.172 0.828
#> GSM701806     1  0.0000      0.861 1.000 0.000
#> GSM701805     1  0.1184      0.864 0.984 0.016
#> GSM701804     1  0.9286      0.526 0.656 0.344
#> GSM701803     2  0.9491      0.410 0.368 0.632
#> GSM701775     1  0.1414      0.863 0.980 0.020
#> GSM701774     1  0.1633      0.863 0.976 0.024
#> GSM701773     2  0.0000      0.887 0.000 1.000
#> GSM701772     1  0.9170      0.528 0.668 0.332
#> GSM701771     1  0.0000      0.861 1.000 0.000
#> GSM701810     1  0.1414      0.863 0.980 0.020
#> GSM701809     2  0.5408      0.843 0.124 0.876
#> GSM701808     1  0.3274      0.852 0.940 0.060
#> GSM701807     1  0.0672      0.863 0.992 0.008
#> GSM701780     1  0.9866      0.261 0.568 0.432
#> GSM701779     2  0.0000      0.887 0.000 1.000
#> GSM701778     2  0.0000      0.887 0.000 1.000
#> GSM701777     2  0.6887      0.797 0.184 0.816
#> GSM701776     1  0.1414      0.863 0.980 0.020
#> GSM701816     2  0.8955      0.537 0.312 0.688
#> GSM701815     2  0.0000      0.887 0.000 1.000
#> GSM701814     2  0.0000      0.887 0.000 1.000
#> GSM701813     2  0.7674      0.738 0.224 0.776
#> GSM701812     1  0.9963      0.149 0.536 0.464
#> GSM701811     1  0.3584      0.850 0.932 0.068
#> GSM701786     1  0.0000      0.861 1.000 0.000
#> GSM701785     2  0.2236      0.879 0.036 0.964
#> GSM701784     2  0.6973      0.794 0.188 0.812
#> GSM701783     1  0.0376      0.862 0.996 0.004
#> GSM701782     2  0.0938      0.885 0.012 0.988
#> GSM701781     2  0.3733      0.868 0.072 0.928
#> GSM701822     2  0.0000      0.887 0.000 1.000
#> GSM701821     2  0.0000      0.887 0.000 1.000
#> GSM701820     1  0.9850      0.308 0.572 0.428
#> GSM701819     1  0.5946      0.805 0.856 0.144
#> GSM701818     1  0.8081      0.711 0.752 0.248
#> GSM701817     1  0.7528      0.734 0.784 0.216
#> GSM701790     1  0.5059      0.823 0.888 0.112
#> GSM701789     1  0.0000      0.861 1.000 0.000
#> GSM701788     1  0.0000      0.861 1.000 0.000
#> GSM701787     2  0.8386      0.672 0.268 0.732
#> GSM701824     1  0.0000      0.861 1.000 0.000
#> GSM701823     2  0.0000      0.887 0.000 1.000
#> GSM701791     2  0.0000      0.887 0.000 1.000
#> GSM701793     1  0.0000      0.861 1.000 0.000
#> GSM701792     1  0.8016      0.695 0.756 0.244
#> GSM701825     1  0.6531      0.784 0.832 0.168
#> GSM701827     2  0.0000      0.887 0.000 1.000
#> GSM701826     2  0.6801      0.785 0.180 0.820
#> GSM701797     2  0.5294      0.842 0.120 0.880
#> GSM701796     1  0.7602      0.706 0.780 0.220
#> GSM701795     2  0.0000      0.887 0.000 1.000
#> GSM701794     2  0.0000      0.887 0.000 1.000
#> GSM701831     2  0.0000      0.887 0.000 1.000
#> GSM701830     2  0.0000      0.887 0.000 1.000
#> GSM701829     2  0.7883      0.726 0.236 0.764
#> GSM701828     2  0.0376      0.886 0.004 0.996
#> GSM701798     2  0.0000      0.887 0.000 1.000
#> GSM701802     2  0.2603      0.877 0.044 0.956
#> GSM701801     2  0.9732      0.356 0.404 0.596
#> GSM701800     2  0.8861      0.607 0.304 0.696
#> GSM701799     2  0.0000      0.887 0.000 1.000
#> GSM701832     2  0.4161      0.862 0.084 0.916
#> GSM701835     2  0.7299      0.771 0.204 0.796
#> GSM701834     2  0.0000      0.887 0.000 1.000
#> GSM701833     2  0.0000      0.887 0.000 1.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-pam-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-pam-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-pam-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-pam-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n age(p) time(p) tissue(p) individual(p) k
#> CV:pam 65 0.0716 0.00214  0.831657        0.1029 2
#> CV:pam 60 0.1859 0.01079  0.001785        0.1046 3
#> CV:pam 54 0.0362 0.11800  0.000331        0.0100 4
#> CV:pam 51 0.0517 0.05744  0.000907        0.0101 5
#> CV:pam 51 0.0517 0.05744  0.000907        0.0101 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "mclust"]
# you can also extract it by
# res = res_list["CV:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots generated from res for every k (number of partitions) into a single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk CV-mclust-collect-plots

All the plots in the panels can be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups; if the two are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk CV-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.723           0.839       0.920         0.3817 0.627   0.627
#> 3 3 0.333           0.665       0.715         0.4500 0.795   0.699
#> 4 4 0.346           0.478       0.677         0.2418 0.704   0.494
#> 5 5 0.455           0.544       0.691         0.1061 0.753   0.386
#> 6 6 0.538           0.579       0.696         0.0566 0.946   0.773
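
The concordance column compares each single partition with the consensus partition. As a rough sketch (assumption: the mean fraction of samples whose class in a resampled partition matches the consensus class, with labels already aligned; cola's actual computation may differ in detail):

```r
# Mean per-partition agreement with the consensus class labels
# (hypothetical partitions, labels assumed aligned).
consensus_cl = c(1, 1, 2, 2)
partitions = list(c(1, 1, 2, 2), c(1, 2, 2, 2), c(1, 1, 1, 2))
concordance = mean(sapply(partitions, function(cl) mean(cl == consensus_cl)))
concordance  # (1 + 3/4 + 3/4) / 3 = 5/6
```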

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 3

The following shows the table of partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function using the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.924 1.000 0.000
#> GSM701769     1  0.0376      0.923 0.996 0.004
#> GSM701768     1  0.0000      0.924 1.000 0.000
#> GSM701767     1  0.0672      0.923 0.992 0.008
#> GSM701766     1  0.2423      0.909 0.960 0.040
#> GSM701806     1  0.1414      0.924 0.980 0.020
#> GSM701805     1  0.1414      0.924 0.980 0.020
#> GSM701804     1  0.2948      0.916 0.948 0.052
#> GSM701803     1  0.2778      0.917 0.952 0.048
#> GSM701775     1  0.0000      0.924 1.000 0.000
#> GSM701774     1  0.0000      0.924 1.000 0.000
#> GSM701773     2  0.3274      0.867 0.060 0.940
#> GSM701772     1  0.0672      0.923 0.992 0.008
#> GSM701771     1  0.0000      0.924 1.000 0.000
#> GSM701810     1  0.2948      0.916 0.948 0.052
#> GSM701809     1  0.3114      0.915 0.944 0.056
#> GSM701808     1  0.2948      0.916 0.948 0.052
#> GSM701807     1  0.2948      0.916 0.948 0.052
#> GSM701780     1  0.2236      0.923 0.964 0.036
#> GSM701779     2  0.3431      0.866 0.064 0.936
#> GSM701778     2  0.8081      0.709 0.248 0.752
#> GSM701777     1  0.3879      0.880 0.924 0.076
#> GSM701776     1  0.2948      0.916 0.948 0.052
#> GSM701816     1  0.1633      0.925 0.976 0.024
#> GSM701815     1  0.9491      0.432 0.632 0.368
#> GSM701814     2  0.0672      0.864 0.008 0.992
#> GSM701813     1  0.2948      0.916 0.948 0.052
#> GSM701812     1  0.3274      0.915 0.940 0.060
#> GSM701811     1  0.0376      0.923 0.996 0.004
#> GSM701786     1  0.0376      0.923 0.996 0.004
#> GSM701785     1  0.8813      0.524 0.700 0.300
#> GSM701784     1  0.5842      0.803 0.860 0.140
#> GSM701783     1  0.0672      0.924 0.992 0.008
#> GSM701782     1  0.9635      0.274 0.612 0.388
#> GSM701781     1  0.1843      0.913 0.972 0.028
#> GSM701822     2  0.0672      0.864 0.008 0.992
#> GSM701821     2  0.9491      0.405 0.368 0.632
#> GSM701820     1  0.2948      0.916 0.948 0.052
#> GSM701819     1  0.2948      0.916 0.948 0.052
#> GSM701818     1  0.2948      0.916 0.948 0.052
#> GSM701817     1  0.2948      0.916 0.948 0.052
#> GSM701790     1  0.0000      0.924 1.000 0.000
#> GSM701789     1  0.0000      0.924 1.000 0.000
#> GSM701788     1  0.0376      0.923 0.996 0.004
#> GSM701787     1  0.4161      0.868 0.916 0.084
#> GSM701824     1  0.2948      0.916 0.948 0.052
#> GSM701823     1  0.8861      0.612 0.696 0.304
#> GSM701791     2  0.3114      0.867 0.056 0.944
#> GSM701793     1  0.0000      0.924 1.000 0.000
#> GSM701792     1  0.0000      0.924 1.000 0.000
#> GSM701825     1  0.2948      0.916 0.948 0.052
#> GSM701827     2  0.0938      0.864 0.012 0.988
#> GSM701826     1  0.9460      0.485 0.636 0.364
#> GSM701797     1  0.0672      0.923 0.992 0.008
#> GSM701796     1  0.0000      0.924 1.000 0.000
#> GSM701795     2  0.3114      0.867 0.056 0.944
#> GSM701794     2  0.3114      0.867 0.056 0.944
#> GSM701831     2  0.9963      0.234 0.464 0.536
#> GSM701830     2  0.1184      0.866 0.016 0.984
#> GSM701829     1  0.3274      0.916 0.940 0.060
#> GSM701828     1  0.7056      0.797 0.808 0.192
#> GSM701798     2  0.6343      0.806 0.160 0.840
#> GSM701802     1  0.7219      0.722 0.800 0.200
#> GSM701801     1  0.0938      0.922 0.988 0.012
#> GSM701800     1  0.0938      0.922 0.988 0.012
#> GSM701799     2  0.3114      0.867 0.056 0.944
#> GSM701832     2  0.9944      0.219 0.456 0.544
#> GSM701835     1  0.0938      0.922 0.988 0.012
#> GSM701834     2  0.0672      0.864 0.008 0.992
#> GSM701833     2  0.0672      0.864 0.008 0.992
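
The silhouette column can be sketched as the standard silhouette width computed with 1 - consensus as the distance (an assumption about the exact definition; the consensus matrix below is a toy example):

```r
# Silhouette width of sample i: (b - a) / max(a, b), where a is the mean
# distance to its own class and b the smallest mean distance to another class.
silhouette_one = function(i, d, cl) {
    a = mean(d[i, cl == cl[i] & seq_along(cl) != i])
    b = min(sapply(setdiff(unique(cl), cl[i]),
                   function(g) mean(d[i, cl == g])))
    (b - a) / max(a, b)
}
cm = matrix(0.1, 4, 4); cm[1:2, 1:2] = 0.9; cm[3:4, 3:4] = 0.9; diag(cm) = 1
silhouette_one(1, d = 1 - cm, cl = c(1, 1, 2, 2))  # (0.9 - 0.1) / 0.9 = 8/9
```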

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-mclust-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-mclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-mclust-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-mclust-collect-classes

Test the correlation between subgroups and known annotations. If a known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n  age(p) time(p) tissue(p) individual(p) k
#> CV:mclust 64 0.08335  0.0954    1.0000        0.0969 2
#> CV:mclust 60 0.10041  0.0114    1.0000        0.0451 3
#> CV:mclust 36 0.06862  0.0207    0.0627        0.0134 4
#> CV:mclust 51 0.00667  0.1703    0.0392        0.0352 5
#> CV:mclust 51 0.02880  0.0283    0.2561        0.1261 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "NMF"]
# you can also extract it by
# res = res_list["CV:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'CV' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all plots generated from res for every k (number of partitions) into a single page, providing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk CV-NMF-collect-plots

All the plots in the panels can be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups; if the two are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk CV-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.791           0.854       0.941         0.5023 0.493   0.493
#> 3 3 0.402           0.606       0.783         0.3009 0.794   0.608
#> 4 4 0.391           0.456       0.678         0.1128 0.907   0.748
#> 5 5 0.452           0.349       0.630         0.0609 0.935   0.792
#> 6 6 0.476           0.342       0.564         0.0428 0.924   0.742

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the output of get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
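As a sketch of these two membership-derived quantities (illustrative, not cola's exact implementation): the class label is the group with the highest membership probability, and the entropy is the normalized Shannon entropy of the membership row. With the probabilities from the table below, e.g. (0.084, 0.916), this reproduces the reported entropy of 0.4161.

```python
import math

def class_and_entropy(p):
    """p: membership probabilities of one sample across the k groups."""
    k = len(p)
    label = max(range(k), key=lambda i: p[i]) + 1   # group with highest probability
    h = -sum(x * math.log2(x) for x in p if x > 0)  # Shannon entropy
    return label, h / math.log2(k)                  # normalized to [0, 1]

print(class_and_entropy([1.0, 0.0]))      # certain assignment: entropy 0
print(class_and_entropy([0.084, 0.916]))  # class 2, entropy ~0.416
```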

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000     0.9393 1.000 0.000
#> GSM701769     1  0.0000     0.9393 1.000 0.000
#> GSM701768     1  0.0672     0.9358 0.992 0.008
#> GSM701767     2  0.4161     0.8605 0.084 0.916
#> GSM701766     2  0.2236     0.9060 0.036 0.964
#> GSM701806     1  0.0000     0.9393 1.000 0.000
#> GSM701805     1  0.0000     0.9393 1.000 0.000
#> GSM701804     1  0.5294     0.8448 0.880 0.120
#> GSM701803     1  0.0000     0.9393 1.000 0.000
#> GSM701775     1  0.0000     0.9393 1.000 0.000
#> GSM701774     1  0.0938     0.9340 0.988 0.012
#> GSM701773     2  0.0000     0.9282 0.000 1.000
#> GSM701772     2  0.9552     0.3844 0.376 0.624
#> GSM701771     1  0.0000     0.9393 1.000 0.000
#> GSM701810     1  0.0000     0.9393 1.000 0.000
#> GSM701809     1  0.8144     0.6569 0.748 0.252
#> GSM701808     1  0.0000     0.9393 1.000 0.000
#> GSM701807     1  0.0000     0.9393 1.000 0.000
#> GSM701780     1  0.0000     0.9393 1.000 0.000
#> GSM701779     2  0.0000     0.9282 0.000 1.000
#> GSM701778     2  0.0000     0.9282 0.000 1.000
#> GSM701777     2  0.0000     0.9282 0.000 1.000
#> GSM701776     1  0.0000     0.9393 1.000 0.000
#> GSM701816     1  0.9944     0.1685 0.544 0.456
#> GSM701815     2  0.0000     0.9282 0.000 1.000
#> GSM701814     2  0.0000     0.9282 0.000 1.000
#> GSM701813     1  0.9209     0.4989 0.664 0.336
#> GSM701812     1  0.8861     0.5805 0.696 0.304
#> GSM701811     1  0.0000     0.9393 1.000 0.000
#> GSM701786     1  0.0000     0.9393 1.000 0.000
#> GSM701785     2  0.1633     0.9136 0.024 0.976
#> GSM701784     2  0.0000     0.9282 0.000 1.000
#> GSM701783     1  0.0000     0.9393 1.000 0.000
#> GSM701782     2  0.0000     0.9282 0.000 1.000
#> GSM701781     2  0.9988     0.0731 0.480 0.520
#> GSM701822     2  0.0000     0.9282 0.000 1.000
#> GSM701821     2  0.0000     0.9282 0.000 1.000
#> GSM701820     1  0.2043     0.9222 0.968 0.032
#> GSM701819     1  0.0376     0.9376 0.996 0.004
#> GSM701818     1  0.0000     0.9393 1.000 0.000
#> GSM701817     1  0.3733     0.8899 0.928 0.072
#> GSM701790     1  0.0000     0.9393 1.000 0.000
#> GSM701789     1  0.0000     0.9393 1.000 0.000
#> GSM701788     1  0.0000     0.9393 1.000 0.000
#> GSM701787     2  0.3431     0.8823 0.064 0.936
#> GSM701824     1  0.0000     0.9393 1.000 0.000
#> GSM701823     2  0.0000     0.9282 0.000 1.000
#> GSM701791     2  0.0000     0.9282 0.000 1.000
#> GSM701793     1  0.0000     0.9393 1.000 0.000
#> GSM701792     2  0.9993     0.0504 0.484 0.516
#> GSM701825     1  0.3274     0.9021 0.940 0.060
#> GSM701827     2  0.0000     0.9282 0.000 1.000
#> GSM701826     2  0.0000     0.9282 0.000 1.000
#> GSM701797     2  0.9754     0.3135 0.408 0.592
#> GSM701796     1  0.0000     0.9393 1.000 0.000
#> GSM701795     2  0.0000     0.9282 0.000 1.000
#> GSM701794     2  0.0000     0.9282 0.000 1.000
#> GSM701831     2  0.0000     0.9282 0.000 1.000
#> GSM701830     2  0.0000     0.9282 0.000 1.000
#> GSM701829     2  0.2236     0.9057 0.036 0.964
#> GSM701828     2  0.0000     0.9282 0.000 1.000
#> GSM701798     2  0.0000     0.9282 0.000 1.000
#> GSM701802     2  0.0000     0.9282 0.000 1.000
#> GSM701801     1  0.3114     0.9049 0.944 0.056
#> GSM701800     1  0.6973     0.7595 0.812 0.188
#> GSM701799     2  0.0000     0.9282 0.000 1.000
#> GSM701832     2  0.0000     0.9282 0.000 1.000
#> GSM701835     2  0.8207     0.6516 0.256 0.744
#> GSM701834     2  0.0000     0.9282 0.000 1.000
#> GSM701833     2  0.0000     0.9282 0.000 1.000

Heatmap of the consensus matrix, which visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-NMF-consensus-heatmap-1
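The consensus value behind this heatmap is simply the co-clustering frequency over the resampled partitions; a minimal sketch (not cola's code):

```python
# Sketch: a consensus matrix entry is the fraction of resampled partitions in
# which two samples receive the same class label.

def consensus_matrix(partitions):
    """partitions: one class-label vector per resampling run."""
    n = len(partitions[0])
    m = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1 / len(partitions)
    return m

runs = [[1, 1, 2, 2], [1, 1, 2, 2], [1, 2, 2, 2], [1, 1, 1, 2]]
cm = consensus_matrix(runs)
print(cm[0][1])  # samples 1 and 2 co-cluster in 3 of 4 runs -> 0.75
```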

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-NMF-membership-heatmap-1

Once the classes for the columns are determined, we can look for signature rows that are significantly different between the classes; these can serve as candidate markers for certain classes. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
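For example, candidate signatures could be extracted from such a table by an FDR cutoff (the 0.01 threshold here is purely illustrative):

```python
# Sketch: select rows of a signature table like `tb` below a chosen FDR.
rows = [
    {"which_row": 38, "fdr": 0.0428, "km": 1},
    {"which_row": 59, "fdr": 0.0061, "km": 1},
    {"which_row": 98, "fdr": 0.0094, "km": 2},
]
significant = [r["which_row"] for r in rows if r["fdr"] < 0.01]
print(significant)  # -> [59, 98]
```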

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n age(p) time(p) tissue(p) individual(p) k
#> CV:NMF 64 0.0251 0.04233   1.00000        0.0151 2
#> CV:NMF 49 0.0162 0.04951   0.00285        0.0051 3
#> CV:NMF 35 1.0000 0.00569   0.16246        0.1378 4
#> CV:NMF 24 1.0000 0.00153   0.19492        0.3014 5
#> CV:NMF 24 1.0000 0.00153   0.19492        0.3014 6
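The dispatch rule above can be sketched as follows (illustrative and dependency-free; only the test statistics are computed here, whereas cola reports p-values and differs in detail):

```python
# Numeric annotation -> one-way ANOVA F statistic across subgroups;
# discrete annotation -> chi-squared statistic on the contingency table.

def anova_f(classes, values):
    groups = {}
    for c, v in zip(classes, values):
        groups.setdefault(c, []).append(v)
    grand = sum(values) / len(values)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups.values())
    df_b, df_w = len(groups) - 1, len(values) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

def chi_squared(classes, labels):
    ks, cats, n = sorted(set(classes)), sorted(set(labels)), len(classes)
    obs = {(k, c): 0 for k in ks for c in cats}
    for k, c in zip(classes, labels):
        obs[(k, c)] += 1
    row = {k: sum(obs[(k, c)] for c in cats) for k in ks}
    col = {c: sum(obs[(k, c)] for k in ks) for c in cats}
    return sum((obs[(k, c)] - row[k] * col[c] / n) ** 2 / (row[k] * col[c] / n)
               for k in ks for c in cats)

print(anova_f([1, 1, 2, 2], [1.0, 1.2, 3.0, 3.2]))      # large F: groups differ
print(chi_squared([1, 1, 2, 2], ["a", "a", "b", "b"]))  # -> 4.0
```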

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "hclust"]
# you can also extract it by
# res = res_list["MAD:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) onto a single page, providing an easy and fast comparison between the different k.

collect_plots(res)

plot of chunk MAD-hclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.426           0.757       0.880         0.4633 0.508   0.508
#> 3 3 0.276           0.480       0.745         0.2872 0.928   0.864
#> 4 4 0.312           0.423       0.689         0.0954 0.904   0.806
#> 5 5 0.359           0.433       0.637         0.0858 0.839   0.632
#> 6 6 0.404           0.372       0.627         0.0641 0.861   0.588

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the output of get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0376     0.8858 0.996 0.004
#> GSM701769     1  0.1414     0.8874 0.980 0.020
#> GSM701768     1  0.2423     0.8837 0.960 0.040
#> GSM701767     1  0.8608     0.5917 0.716 0.284
#> GSM701766     2  0.9909     0.3640 0.444 0.556
#> GSM701806     1  0.0000     0.8847 1.000 0.000
#> GSM701805     1  0.0000     0.8847 1.000 0.000
#> GSM701804     1  0.2603     0.8805 0.956 0.044
#> GSM701803     1  0.0672     0.8868 0.992 0.008
#> GSM701775     1  0.2603     0.8811 0.956 0.044
#> GSM701774     1  0.4022     0.8582 0.920 0.080
#> GSM701773     2  0.0672     0.8145 0.008 0.992
#> GSM701772     2  0.9922     0.3503 0.448 0.552
#> GSM701771     1  0.0000     0.8847 1.000 0.000
#> GSM701810     1  0.0938     0.8874 0.988 0.012
#> GSM701809     1  0.5178     0.8311 0.884 0.116
#> GSM701808     1  0.0938     0.8869 0.988 0.012
#> GSM701807     1  0.0000     0.8847 1.000 0.000
#> GSM701780     1  0.3114     0.8762 0.944 0.056
#> GSM701779     2  0.0000     0.8096 0.000 1.000
#> GSM701778     2  0.5059     0.8145 0.112 0.888
#> GSM701777     2  0.9491     0.5543 0.368 0.632
#> GSM701776     1  0.0000     0.8847 1.000 0.000
#> GSM701816     1  0.4690     0.8441 0.900 0.100
#> GSM701815     2  0.7139     0.7592 0.196 0.804
#> GSM701814     2  0.2948     0.8280 0.052 0.948
#> GSM701813     1  0.7219     0.7304 0.800 0.200
#> GSM701812     1  0.3114     0.8751 0.944 0.056
#> GSM701811     1  0.2423     0.8828 0.960 0.040
#> GSM701786     1  0.0000     0.8847 1.000 0.000
#> GSM701785     2  0.7056     0.7682 0.192 0.808
#> GSM701784     2  0.8267     0.7100 0.260 0.740
#> GSM701783     1  0.0000     0.8847 1.000 0.000
#> GSM701782     2  0.9491     0.5489 0.368 0.632
#> GSM701781     1  0.9661     0.2923 0.608 0.392
#> GSM701822     2  0.1843     0.8231 0.028 0.972
#> GSM701821     2  0.7815     0.7380 0.232 0.768
#> GSM701820     1  0.4815     0.8420 0.896 0.104
#> GSM701819     1  0.0376     0.8858 0.996 0.004
#> GSM701818     1  0.0000     0.8847 1.000 0.000
#> GSM701817     1  0.2423     0.8826 0.960 0.040
#> GSM701790     1  0.1184     0.8877 0.984 0.016
#> GSM701789     1  0.1184     0.8877 0.984 0.016
#> GSM701788     1  0.0000     0.8847 1.000 0.000
#> GSM701787     2  0.9661     0.4900 0.392 0.608
#> GSM701824     1  0.2043     0.8851 0.968 0.032
#> GSM701823     2  0.3114     0.8215 0.056 0.944
#> GSM701791     2  0.1633     0.8217 0.024 0.976
#> GSM701793     1  0.0376     0.8860 0.996 0.004
#> GSM701792     1  0.8555     0.5690 0.720 0.280
#> GSM701825     1  0.1184     0.8873 0.984 0.016
#> GSM701827     2  0.0000     0.8096 0.000 1.000
#> GSM701826     1  0.9909     0.0803 0.556 0.444
#> GSM701797     1  0.9552     0.3376 0.624 0.376
#> GSM701796     1  0.1184     0.8871 0.984 0.016
#> GSM701795     2  0.3274     0.8283 0.060 0.940
#> GSM701794     2  0.0000     0.8096 0.000 1.000
#> GSM701831     2  0.3274     0.8280 0.060 0.940
#> GSM701830     2  0.2423     0.8257 0.040 0.960
#> GSM701829     1  0.9608     0.3115 0.616 0.384
#> GSM701828     2  0.5178     0.8136 0.116 0.884
#> GSM701798     2  0.3584     0.8267 0.068 0.932
#> GSM701802     2  0.9129     0.6200 0.328 0.672
#> GSM701801     1  0.6712     0.7564 0.824 0.176
#> GSM701800     1  0.9393     0.3895 0.644 0.356
#> GSM701799     2  0.1414     0.8197 0.020 0.980
#> GSM701832     2  0.7950     0.7314 0.240 0.760
#> GSM701835     2  0.9988     0.2348 0.480 0.520
#> GSM701834     2  0.2948     0.8282 0.052 0.948
#> GSM701833     2  0.0000     0.8096 0.000 1.000

Heatmap of the consensus matrix, which visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-hclust-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-hclust-membership-heatmap-1

Once the classes for the columns are determined, we can look for signature rows that are significantly different between the classes; these can serve as candidate markers for certain classes. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n age(p)  time(p) tissue(p) individual(p) k
#> MAD:hclust 61 0.0286 0.004786    1.0000       0.00896 2
#> MAD:hclust 40 0.5136 0.000485    0.1517       0.11598 3
#> MAD:hclust 25     NA       NA        NA            NA 4
#> MAD:hclust 29 0.1757 0.102752    0.0316       0.20015 5
#> MAD:hclust 28 0.3504 0.036708    0.0499       0.15650 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "kmeans"]
# you can also extract it by
# res = res_list["MAD:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) onto a single page, providing an easy and fast comparison between the different k.

collect_plots(res)

plot of chunk MAD-kmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.969           0.941       0.973         0.4939 0.503   0.503
#> 3 3 0.495           0.588       0.781         0.3429 0.748   0.533
#> 4 4 0.580           0.719       0.820         0.1244 0.832   0.543
#> 5 5 0.617           0.593       0.749         0.0590 0.948   0.798
#> 6 6 0.632           0.454       0.692         0.0358 0.958   0.817

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the output of get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.982 1.000 0.000
#> GSM701769     1  0.0000      0.982 1.000 0.000
#> GSM701768     1  0.0000      0.982 1.000 0.000
#> GSM701767     1  0.0000      0.982 1.000 0.000
#> GSM701766     2  0.7219      0.768 0.200 0.800
#> GSM701806     1  0.0000      0.982 1.000 0.000
#> GSM701805     1  0.0672      0.982 0.992 0.008
#> GSM701804     1  0.0672      0.982 0.992 0.008
#> GSM701803     1  0.0672      0.982 0.992 0.008
#> GSM701775     1  0.0000      0.982 1.000 0.000
#> GSM701774     1  0.0000      0.982 1.000 0.000
#> GSM701773     2  0.0672      0.957 0.008 0.992
#> GSM701772     1  0.0376      0.980 0.996 0.004
#> GSM701771     1  0.0000      0.982 1.000 0.000
#> GSM701810     1  0.0672      0.982 0.992 0.008
#> GSM701809     1  0.0938      0.980 0.988 0.012
#> GSM701808     1  0.0672      0.982 0.992 0.008
#> GSM701807     1  0.0672      0.982 0.992 0.008
#> GSM701780     1  0.0000      0.982 1.000 0.000
#> GSM701779     2  0.0672      0.957 0.008 0.992
#> GSM701778     2  0.0672      0.957 0.008 0.992
#> GSM701777     2  0.8207      0.682 0.256 0.744
#> GSM701776     1  0.0672      0.982 0.992 0.008
#> GSM701816     1  0.0672      0.982 0.992 0.008
#> GSM701815     2  0.0000      0.957 0.000 1.000
#> GSM701814     2  0.0000      0.957 0.000 1.000
#> GSM701813     1  0.0672      0.982 0.992 0.008
#> GSM701812     1  0.0672      0.982 0.992 0.008
#> GSM701811     1  0.0000      0.982 1.000 0.000
#> GSM701786     1  0.0000      0.982 1.000 0.000
#> GSM701785     2  0.0672      0.957 0.008 0.992
#> GSM701784     2  0.0672      0.957 0.008 0.992
#> GSM701783     1  0.0000      0.982 1.000 0.000
#> GSM701782     2  0.0672      0.957 0.008 0.992
#> GSM701781     1  0.9775      0.246 0.588 0.412
#> GSM701822     2  0.0000      0.957 0.000 1.000
#> GSM701821     2  0.0000      0.957 0.000 1.000
#> GSM701820     1  0.0672      0.982 0.992 0.008
#> GSM701819     1  0.0672      0.982 0.992 0.008
#> GSM701818     1  0.0672      0.982 0.992 0.008
#> GSM701817     1  0.0672      0.982 0.992 0.008
#> GSM701790     1  0.0000      0.982 1.000 0.000
#> GSM701789     1  0.0000      0.982 1.000 0.000
#> GSM701788     1  0.0000      0.982 1.000 0.000
#> GSM701787     2  0.3114      0.922 0.056 0.944
#> GSM701824     1  0.0672      0.982 0.992 0.008
#> GSM701823     2  0.0000      0.957 0.000 1.000
#> GSM701791     2  0.0672      0.957 0.008 0.992
#> GSM701793     1  0.0000      0.982 1.000 0.000
#> GSM701792     1  0.0000      0.982 1.000 0.000
#> GSM701825     1  0.0672      0.982 0.992 0.008
#> GSM701827     2  0.0000      0.957 0.000 1.000
#> GSM701826     2  0.0000      0.957 0.000 1.000
#> GSM701797     1  0.4298      0.889 0.912 0.088
#> GSM701796     1  0.0000      0.982 1.000 0.000
#> GSM701795     2  0.0672      0.957 0.008 0.992
#> GSM701794     2  0.0672      0.957 0.008 0.992
#> GSM701831     2  0.0000      0.957 0.000 1.000
#> GSM701830     2  0.0000      0.957 0.000 1.000
#> GSM701829     2  0.7883      0.701 0.236 0.764
#> GSM701828     2  0.0000      0.957 0.000 1.000
#> GSM701798     2  0.0672      0.957 0.008 0.992
#> GSM701802     2  0.0672      0.957 0.008 0.992
#> GSM701801     1  0.0000      0.982 1.000 0.000
#> GSM701800     1  0.0376      0.982 0.996 0.004
#> GSM701799     2  0.0672      0.957 0.008 0.992
#> GSM701832     2  0.0000      0.957 0.000 1.000
#> GSM701835     2  0.9608      0.399 0.384 0.616
#> GSM701834     2  0.0000      0.957 0.000 1.000
#> GSM701833     2  0.0000      0.957 0.000 1.000

Heatmap of the consensus matrix, which visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-kmeans-consensus-heatmap-1

Heatmap of the membership of samples in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-kmeans-membership-heatmap-1

Once the classes for the columns are determined, we can look for signature rows that are significantly different between the classes; these can serve as candidate markers for certain classes. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n   age(p) time(p) tissue(p) individual(p) k
#> MAD:kmeans 68 0.023128  0.0398  1.00e+00       0.00800 2
#> MAD:kmeans 51 0.138650  0.0162  1.77e-01       0.02620 3
#> MAD:kmeans 63 0.000771  0.0914  4.72e-06       0.00426 4
#> MAD:kmeans 54 0.026719  0.0807  3.18e-04       0.02266 5
#> MAD:kmeans 38 0.131373  0.2798  3.33e-02       0.03806 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "skmeans"]
# you can also extract it by
# res = res_list["MAD:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) onto a single page, providing an easy and fast comparison between the different k.

collect_plots(res)

plot of chunk MAD-skmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.741           0.911       0.958         0.5048 0.496   0.496
#> 3 3 0.386           0.505       0.720         0.3064 0.865   0.734
#> 4 4 0.390           0.370       0.617         0.1151 0.863   0.665
#> 5 5 0.388           0.332       0.563         0.0624 0.832   0.548
#> 6 6 0.438           0.280       0.525         0.0449 0.893   0.647

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. The value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
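For intuition, both quantities can be recomputed from a membership matrix directly. A standalone sketch on a toy matrix (values borrowed from three of the sample rows below; entropy here is Shannon entropy normalized by log2 of the number of groups, which reproduces the entropy column of the table):

```r
# Toy membership matrix: one row per sample, one column per group
m = rbind(s1 = c(1.000, 0.000),
          s2 = c(0.816, 0.184),
          s3 = c(0.076, 0.924))
colnames(m) = c("p1", "p2")

# Class label: the group with the highest probability
cl = apply(m, 1, which.max)

# Shannon entropy per sample, normalized by log2(k) so it lies in [0, 1]
ent = apply(m, 1, function(p) {
    p = p[p > 0]
    -sum(p * log2(p)) / log2(ncol(m))
})
round(ent, 4)
#>     s1     s2     s3
#> 0.0000 0.6887 0.3879
```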


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.948 1.000 0.000
#> GSM701769     1  0.0000      0.948 1.000 0.000
#> GSM701768     1  0.0000      0.948 1.000 0.000
#> GSM701767     1  0.6887      0.791 0.816 0.184
#> GSM701766     2  0.3879      0.903 0.076 0.924
#> GSM701806     1  0.0000      0.948 1.000 0.000
#> GSM701805     1  0.0000      0.948 1.000 0.000
#> GSM701804     1  0.0938      0.943 0.988 0.012
#> GSM701803     1  0.0000      0.948 1.000 0.000
#> GSM701775     1  0.0000      0.948 1.000 0.000
#> GSM701774     1  0.0938      0.943 0.988 0.012
#> GSM701773     2  0.0000      0.962 0.000 1.000
#> GSM701772     1  0.8081      0.696 0.752 0.248
#> GSM701771     1  0.0000      0.948 1.000 0.000
#> GSM701810     1  0.0000      0.948 1.000 0.000
#> GSM701809     1  0.7376      0.763 0.792 0.208
#> GSM701808     1  0.0000      0.948 1.000 0.000
#> GSM701807     1  0.0000      0.948 1.000 0.000
#> GSM701780     1  0.0000      0.948 1.000 0.000
#> GSM701779     2  0.0000      0.962 0.000 1.000
#> GSM701778     2  0.0000      0.962 0.000 1.000
#> GSM701777     2  0.5178      0.864 0.116 0.884
#> GSM701776     1  0.0000      0.948 1.000 0.000
#> GSM701816     1  0.4939      0.874 0.892 0.108
#> GSM701815     2  0.0000      0.962 0.000 1.000
#> GSM701814     2  0.0000      0.962 0.000 1.000
#> GSM701813     1  0.6801      0.795 0.820 0.180
#> GSM701812     1  0.1184      0.941 0.984 0.016
#> GSM701811     1  0.0000      0.948 1.000 0.000
#> GSM701786     1  0.0000      0.948 1.000 0.000
#> GSM701785     2  0.0000      0.962 0.000 1.000
#> GSM701784     2  0.0000      0.962 0.000 1.000
#> GSM701783     1  0.0000      0.948 1.000 0.000
#> GSM701782     2  0.0376      0.959 0.004 0.996
#> GSM701781     2  0.7950      0.689 0.240 0.760
#> GSM701822     2  0.0000      0.962 0.000 1.000
#> GSM701821     2  0.0000      0.962 0.000 1.000
#> GSM701820     1  0.4161      0.895 0.916 0.084
#> GSM701819     1  0.0000      0.948 1.000 0.000
#> GSM701818     1  0.0000      0.948 1.000 0.000
#> GSM701817     1  0.2948      0.919 0.948 0.052
#> GSM701790     1  0.0000      0.948 1.000 0.000
#> GSM701789     1  0.0000      0.948 1.000 0.000
#> GSM701788     1  0.0000      0.948 1.000 0.000
#> GSM701787     2  0.1843      0.943 0.028 0.972
#> GSM701824     1  0.0000      0.948 1.000 0.000
#> GSM701823     2  0.0000      0.962 0.000 1.000
#> GSM701791     2  0.0000      0.962 0.000 1.000
#> GSM701793     1  0.0000      0.948 1.000 0.000
#> GSM701792     1  0.8763      0.607 0.704 0.296
#> GSM701825     1  0.0000      0.948 1.000 0.000
#> GSM701827     2  0.0000      0.962 0.000 1.000
#> GSM701826     2  0.0000      0.962 0.000 1.000
#> GSM701797     2  0.9635      0.355 0.388 0.612
#> GSM701796     1  0.0000      0.948 1.000 0.000
#> GSM701795     2  0.0000      0.962 0.000 1.000
#> GSM701794     2  0.0000      0.962 0.000 1.000
#> GSM701831     2  0.0000      0.962 0.000 1.000
#> GSM701830     2  0.0000      0.962 0.000 1.000
#> GSM701829     2  0.5946      0.829 0.144 0.856
#> GSM701828     2  0.0000      0.962 0.000 1.000
#> GSM701798     2  0.0000      0.962 0.000 1.000
#> GSM701802     2  0.0000      0.962 0.000 1.000
#> GSM701801     1  0.4939      0.874 0.892 0.108
#> GSM701800     1  0.9170      0.535 0.668 0.332
#> GSM701799     2  0.0000      0.962 0.000 1.000
#> GSM701832     2  0.0000      0.962 0.000 1.000
#> GSM701835     2  0.5178      0.862 0.116 0.884
#> GSM701834     2  0.0000      0.962 0.000 1.000
#> GSM701833     2  0.0000      0.962 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-skmeans-consensus-heatmap-1

Heatmap of the sample memberships in all individual partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-skmeans-membership-heatmap-1

Once the column classes are determined, we can look for signature rows that are significantly different between the classes; these can serve as candidate markers for particular classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
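Putting these columns to use, the signature rows can be filtered by FDR and used to index the input matrix. A standalone sketch reusing the example rows printed above (the 0.02 cutoff is arbitrary; mat stands for the matrix returned by get_matrix()):

```r
# Rebuild the example rows of tb shown above, for a standalone illustration
tb = data.frame(
    which_row = c(38, 40, 55, 59, 60, 98),
    fdr = c(0.042760348, 0.018707592, 0.019134737,
            0.006059896, 0.018055526, 0.009384629),
    km = c(1, 1, 1, 1, 1, 2)
)

# Keep signatures passing a stricter FDR cutoff
sig = tb[tb$fdr < 0.02, ]
sig$which_row
#> [1] 40 55 59 60 98

# sig_mat = mat[sig$which_row, ]  # code only for demonstration: subset the input matrix
```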

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-skmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.
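The two test choices can be reproduced on made-up annotations. A standalone sketch (the subgroup labels and annotations below are simulated, not taken from res):

```r
set.seed(1)
cl = factor(rep(c(1, 2), each = 10))     # simulated subgroup labels

# Numeric annotation -> one-way ANOVA
age = c(rnorm(10, mean = 40), rnorm(10, mean = 50))
anova_p = summary(aov(age ~ cl))[[1]][["Pr(>F)"]][1]

# Discrete annotation -> chi-squared contingency table test
tissue = factor(rep(c("A", "B"), times = 10))
chisq_p = chisq.test(table(cl, tissue))$p.value

c(anova = anova_p, chisq = chisq_p)
```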

test_to_known_factors(res)
#>              n age(p) time(p) tissue(p) individual(p) k
#> MAD:skmeans 69 0.0153  0.0174     1.000        0.0123 2
#> MAD:skmeans 35 0.0211  0.0432     0.552        0.0269 3
#> MAD:skmeans 20     NA      NA        NA            NA 4
#> MAD:skmeans 18     NA      NA        NA            NA 5
#> MAD:skmeans 13     NA      NA        NA            NA 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "pam"]
# you can also extract it by
# res = res_list["MAD:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) onto a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-pam-collect-plots

Each panel in the collected plot can also be generated by an individual function; those plots are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk MAD-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.780           0.880       0.947         0.5043 0.496   0.496
#> 3 3 0.402           0.322       0.576         0.3125 0.754   0.543
#> 4 4 0.519           0.609       0.794         0.1264 0.757   0.412
#> 5 5 0.533           0.512       0.727         0.0323 0.964   0.858
#> 6 6 0.538           0.496       0.722         0.0169 0.969   0.865

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000     0.9562 1.000 0.000
#> GSM701769     1  0.0376     0.9558 0.996 0.004
#> GSM701768     1  0.0672     0.9545 0.992 0.008
#> GSM701767     2  0.9998     0.0896 0.492 0.508
#> GSM701766     2  0.6623     0.7841 0.172 0.828
#> GSM701806     1  0.0000     0.9562 1.000 0.000
#> GSM701805     1  0.0000     0.9562 1.000 0.000
#> GSM701804     1  0.1633     0.9499 0.976 0.024
#> GSM701803     1  0.1633     0.9493 0.976 0.024
#> GSM701775     1  0.0000     0.9562 1.000 0.000
#> GSM701774     1  0.4562     0.8861 0.904 0.096
#> GSM701773     2  0.0000     0.9296 0.000 1.000
#> GSM701772     2  0.9044     0.5676 0.320 0.680
#> GSM701771     1  0.0000     0.9562 1.000 0.000
#> GSM701810     1  0.0000     0.9562 1.000 0.000
#> GSM701809     2  0.9896     0.2049 0.440 0.560
#> GSM701808     1  0.0000     0.9562 1.000 0.000
#> GSM701807     1  0.0000     0.9562 1.000 0.000
#> GSM701780     1  0.6531     0.7968 0.832 0.168
#> GSM701779     2  0.0000     0.9296 0.000 1.000
#> GSM701778     2  0.0000     0.9296 0.000 1.000
#> GSM701777     2  0.1414     0.9195 0.020 0.980
#> GSM701776     1  0.0000     0.9562 1.000 0.000
#> GSM701816     1  0.4022     0.9068 0.920 0.080
#> GSM701815     2  0.0376     0.9277 0.004 0.996
#> GSM701814     2  0.0000     0.9296 0.000 1.000
#> GSM701813     1  0.8763     0.5860 0.704 0.296
#> GSM701812     1  0.4161     0.9044 0.916 0.084
#> GSM701811     1  0.0672     0.9555 0.992 0.008
#> GSM701786     1  0.0672     0.9553 0.992 0.008
#> GSM701785     2  0.0000     0.9296 0.000 1.000
#> GSM701784     2  0.1843     0.9151 0.028 0.972
#> GSM701783     1  0.0000     0.9562 1.000 0.000
#> GSM701782     2  0.0000     0.9296 0.000 1.000
#> GSM701781     2  0.0000     0.9296 0.000 1.000
#> GSM701822     2  0.0000     0.9296 0.000 1.000
#> GSM701821     2  0.0000     0.9296 0.000 1.000
#> GSM701820     1  0.2236     0.9436 0.964 0.036
#> GSM701819     1  0.2236     0.9433 0.964 0.036
#> GSM701818     1  0.1843     0.9477 0.972 0.028
#> GSM701817     1  0.2423     0.9414 0.960 0.040
#> GSM701790     1  0.8955     0.5100 0.688 0.312
#> GSM701789     1  0.0000     0.9562 1.000 0.000
#> GSM701788     1  0.0000     0.9562 1.000 0.000
#> GSM701787     2  0.2778     0.9034 0.048 0.952
#> GSM701824     1  0.0000     0.9562 1.000 0.000
#> GSM701823     2  0.0000     0.9296 0.000 1.000
#> GSM701791     2  0.0000     0.9296 0.000 1.000
#> GSM701793     1  0.0000     0.9562 1.000 0.000
#> GSM701792     2  0.9580     0.4420 0.380 0.620
#> GSM701825     1  0.0376     0.9558 0.996 0.004
#> GSM701827     2  0.0000     0.9296 0.000 1.000
#> GSM701826     2  0.0376     0.9278 0.004 0.996
#> GSM701797     2  0.0000     0.9296 0.000 1.000
#> GSM701796     1  0.0672     0.9552 0.992 0.008
#> GSM701795     2  0.0000     0.9296 0.000 1.000
#> GSM701794     2  0.0000     0.9296 0.000 1.000
#> GSM701831     2  0.0000     0.9296 0.000 1.000
#> GSM701830     2  0.0000     0.9296 0.000 1.000
#> GSM701829     2  0.4431     0.8637 0.092 0.908
#> GSM701828     2  0.0000     0.9296 0.000 1.000
#> GSM701798     2  0.0000     0.9296 0.000 1.000
#> GSM701802     2  0.0000     0.9296 0.000 1.000
#> GSM701801     2  0.7674     0.7123 0.224 0.776
#> GSM701800     2  0.5842     0.8204 0.140 0.860
#> GSM701799     2  0.0000     0.9296 0.000 1.000
#> GSM701832     2  0.0000     0.9296 0.000 1.000
#> GSM701835     2  0.4815     0.8531 0.104 0.896
#> GSM701834     2  0.0000     0.9296 0.000 1.000
#> GSM701833     2  0.0000     0.9296 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.
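Conceptually, each entry of the consensus matrix is the fraction of resampled partitions in which the two samples fall into the same group. A standalone toy sketch (random labels, not the actual partitions stored in res):

```r
set.seed(42)
n = 5                                    # number of samples
# 20 toy repeated partitions of the samples into 2 groups
partitions = replicate(20, sample(1:2, n, replace = TRUE))

# Consensus value: fraction of partitions where samples i and j co-cluster
consensus = matrix(0, n, n)
for (p in seq_len(ncol(partitions))) {
    consensus = consensus + outer(partitions[, p], partitions[, p], "==")
}
consensus = consensus / ncol(partitions)

diag(consensus)                          # a sample always co-clusters with itself
#> [1] 1 1 1 1 1
```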

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-pam-consensus-heatmap-1

Heatmap of the sample memberships in all individual partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-pam-membership-heatmap-1

Once the column classes are determined, we can look for signature rows that are significantly different between the classes; these can serve as candidate markers for particular classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n  age(p) time(p) tissue(p) individual(p) k
#> MAD:pam 67 0.00359 0.01601   0.55142        0.0162 2
#> MAD:pam 32 0.13994 0.09509   0.62306        0.1764 3
#> MAD:pam 56 0.08714 0.00275   0.00125        0.0571 4
#> MAD:pam 45 0.59875 0.01587   0.00858        0.3673 5
#> MAD:pam 42 0.47529 0.00502   0.02569        0.2745 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "mclust"]
# you can also extract it by
# res = res_list["MAD:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) onto a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-mclust-collect-plots

Each panel in the collected plot can also be generated by an individual function; those plots are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk MAD-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.351           0.701       0.798         0.4624 0.499   0.499
#> 3 3 0.386           0.430       0.665         0.3273 0.855   0.710
#> 4 4 0.425           0.544       0.669         0.1686 0.833   0.564
#> 5 5 0.518           0.568       0.681         0.0819 0.882   0.581
#> 6 6 0.621           0.567       0.737         0.0527 0.945   0.740

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.809 1.000 0.000
#> GSM701769     1  0.0000      0.809 1.000 0.000
#> GSM701768     1  0.0000      0.809 1.000 0.000
#> GSM701767     1  0.0376      0.806 0.996 0.004
#> GSM701766     1  0.1633      0.801 0.976 0.024
#> GSM701806     1  0.6148      0.571 0.848 0.152
#> GSM701805     2  0.9815      0.753 0.420 0.580
#> GSM701804     2  0.9815      0.753 0.420 0.580
#> GSM701803     2  0.9815      0.753 0.420 0.580
#> GSM701775     1  0.0000      0.809 1.000 0.000
#> GSM701774     1  0.0672      0.803 0.992 0.008
#> GSM701773     1  0.9815      0.497 0.580 0.420
#> GSM701772     1  0.0000      0.809 1.000 0.000
#> GSM701771     1  0.0000      0.809 1.000 0.000
#> GSM701810     2  0.9815      0.753 0.420 0.580
#> GSM701809     2  0.9815      0.753 0.420 0.580
#> GSM701808     2  0.9815      0.753 0.420 0.580
#> GSM701807     2  0.9815      0.753 0.420 0.580
#> GSM701780     1  0.0672      0.803 0.992 0.008
#> GSM701779     1  0.9866      0.485 0.568 0.432
#> GSM701778     1  0.9491      0.537 0.632 0.368
#> GSM701777     1  0.2043      0.797 0.968 0.032
#> GSM701776     2  0.9815      0.753 0.420 0.580
#> GSM701816     2  0.9833      0.748 0.424 0.576
#> GSM701815     2  0.8144      0.679 0.252 0.748
#> GSM701814     2  0.0000      0.591 0.000 1.000
#> GSM701813     2  0.9815      0.753 0.420 0.580
#> GSM701812     2  0.9833      0.748 0.424 0.576
#> GSM701811     1  0.0000      0.809 1.000 0.000
#> GSM701786     1  0.0000      0.809 1.000 0.000
#> GSM701785     1  0.7674      0.656 0.776 0.224
#> GSM701784     1  0.6712      0.696 0.824 0.176
#> GSM701783     1  0.0000      0.809 1.000 0.000
#> GSM701782     1  0.5842      0.726 0.860 0.140
#> GSM701781     1  0.6801      0.522 0.820 0.180
#> GSM701822     2  0.0000      0.591 0.000 1.000
#> GSM701821     2  0.3114      0.616 0.056 0.944
#> GSM701820     2  0.9815      0.753 0.420 0.580
#> GSM701819     2  0.9815      0.753 0.420 0.580
#> GSM701818     2  0.9815      0.753 0.420 0.580
#> GSM701817     2  0.9815      0.753 0.420 0.580
#> GSM701790     1  0.0000      0.809 1.000 0.000
#> GSM701789     1  0.0000      0.809 1.000 0.000
#> GSM701788     1  0.0000      0.809 1.000 0.000
#> GSM701787     1  0.1633      0.801 0.976 0.024
#> GSM701824     2  0.9833      0.749 0.424 0.576
#> GSM701823     2  0.5408      0.654 0.124 0.876
#> GSM701791     1  0.9815      0.497 0.580 0.420
#> GSM701793     1  0.0000      0.809 1.000 0.000
#> GSM701792     1  0.0000      0.809 1.000 0.000
#> GSM701825     2  0.9815      0.753 0.420 0.580
#> GSM701827     2  0.0938      0.584 0.012 0.988
#> GSM701826     2  0.7299      0.687 0.204 0.796
#> GSM701797     1  0.0000      0.809 1.000 0.000
#> GSM701796     1  0.0000      0.809 1.000 0.000
#> GSM701795     1  0.9815      0.497 0.580 0.420
#> GSM701794     1  0.9815      0.497 0.580 0.420
#> GSM701831     2  0.6973      0.516 0.188 0.812
#> GSM701830     2  0.0000      0.591 0.000 1.000
#> GSM701829     2  0.9754      0.750 0.408 0.592
#> GSM701828     2  0.8499      0.681 0.276 0.724
#> GSM701798     1  0.9661      0.519 0.608 0.392
#> GSM701802     1  0.6531      0.704 0.832 0.168
#> GSM701801     1  0.0000      0.809 1.000 0.000
#> GSM701800     1  0.2423      0.767 0.960 0.040
#> GSM701799     1  0.9815      0.497 0.580 0.420
#> GSM701832     2  0.8861      0.464 0.304 0.696
#> GSM701835     1  0.7219      0.455 0.800 0.200
#> GSM701834     2  0.0000      0.591 0.000 1.000
#> GSM701833     2  0.0000      0.591 0.000 1.000

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-mclust-consensus-heatmap-1

Heatmap of the sample memberships in all individual partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-mclust-membership-heatmap-1

Once the column classes are determined, we can look for signature rows that are significantly different between the classes; these can serve as candidate markers for particular classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-mclust-get-signatures-no-scale-1
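Row scaling for such heatmaps typically means z-scoring each row so that patterns rather than absolute levels are compared (z-scoring specifically is an assumption here, shown for illustration). In base R this transformation can be written as:

```r
m = matrix(1:12, nrow = 3)    # toy matrix: rows are features, columns are samples
m_scaled = t(scale(t(m)))     # z-score each row: mean 0, standard deviation 1

round(rowMeans(m_scaled), 10)
#> [1] 0 0 0
apply(m_scaled, 1, sd)
#> [1] 1 1 1
```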

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n age(p) time(p) tissue(p) individual(p) k
#> MAD:mclust 62 1.0000  0.4066  6.81e-12       0.61846 2
#> MAD:mclust 37 0.2120  0.0317  3.69e-07       0.44485 3
#> MAD:mclust 51 0.0142  0.2515  1.90e-09       0.08108 4
#> MAD:mclust 48 0.0021  0.3314  3.10e-08       0.00607 5
#> MAD:mclust 50 0.0621  0.1882  2.29e-07       0.10998 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "NMF"]
# you can also extract it by
# res = res_list["MAD:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'MAD' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) onto a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-NMF-collect-plots

Each panel in the collected plot can also be generated by an individual function; those plots are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk MAD-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.798           0.901       0.956         0.5021 0.496   0.496
#> 3 3 0.456           0.688       0.825         0.3278 0.753   0.539
#> 4 4 0.469           0.529       0.709         0.1118 0.975   0.925
#> 5 5 0.469           0.407       0.616         0.0638 0.908   0.714
#> 6 6 0.493           0.362       0.576         0.0408 0.956   0.835

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.938 1.000 0.000
#> GSM701769     1  0.0000      0.938 1.000 0.000
#> GSM701768     1  0.2236      0.920 0.964 0.036
#> GSM701767     1  0.9460      0.481 0.636 0.364
#> GSM701766     2  0.2948      0.925 0.052 0.948
#> GSM701806     1  0.0000      0.938 1.000 0.000
#> GSM701805     1  0.0000      0.938 1.000 0.000
#> GSM701804     1  0.0376      0.937 0.996 0.004
#> GSM701803     1  0.0938      0.934 0.988 0.012
#> GSM701775     1  0.0000      0.938 1.000 0.000
#> GSM701774     1  0.0000      0.938 1.000 0.000
#> GSM701773     2  0.0000      0.969 0.000 1.000
#> GSM701772     1  0.9933      0.237 0.548 0.452
#> GSM701771     1  0.0000      0.938 1.000 0.000
#> GSM701810     1  0.0000      0.938 1.000 0.000
#> GSM701809     1  0.7376      0.754 0.792 0.208
#> GSM701808     1  0.0000      0.938 1.000 0.000
#> GSM701807     1  0.0000      0.938 1.000 0.000
#> GSM701780     1  0.0000      0.938 1.000 0.000
#> GSM701779     2  0.0000      0.969 0.000 1.000
#> GSM701778     2  0.0000      0.969 0.000 1.000
#> GSM701777     2  0.1184      0.959 0.016 0.984
#> GSM701776     1  0.0000      0.938 1.000 0.000
#> GSM701816     1  0.8207      0.690 0.744 0.256
#> GSM701815     2  0.0000      0.969 0.000 1.000
#> GSM701814     2  0.0000      0.969 0.000 1.000
#> GSM701813     1  0.9129      0.561 0.672 0.328
#> GSM701812     1  0.0672      0.935 0.992 0.008
#> GSM701811     1  0.0000      0.938 1.000 0.000
#> GSM701786     1  0.0000      0.938 1.000 0.000
#> GSM701785     2  0.0000      0.969 0.000 1.000
#> GSM701784     2  0.0000      0.969 0.000 1.000
#> GSM701783     1  0.0000      0.938 1.000 0.000
#> GSM701782     2  0.0376      0.967 0.004 0.996
#> GSM701781     2  0.9248      0.440 0.340 0.660
#> GSM701822     2  0.0000      0.969 0.000 1.000
#> GSM701821     2  0.0376      0.967 0.004 0.996
#> GSM701820     1  0.5059      0.864 0.888 0.112
#> GSM701819     1  0.0000      0.938 1.000 0.000
#> GSM701818     1  0.0000      0.938 1.000 0.000
#> GSM701817     1  0.2043      0.923 0.968 0.032
#> GSM701790     1  0.0000      0.938 1.000 0.000
#> GSM701789     1  0.0000      0.938 1.000 0.000
#> GSM701788     1  0.0000      0.938 1.000 0.000
#> GSM701787     2  0.1414      0.956 0.020 0.980
#> GSM701824     1  0.0000      0.938 1.000 0.000
#> GSM701823     2  0.0000      0.969 0.000 1.000
#> GSM701791     2  0.0000      0.969 0.000 1.000
#> GSM701793     1  0.0000      0.938 1.000 0.000
#> GSM701792     1  0.4815      0.871 0.896 0.104
#> GSM701825     1  0.5178      0.859 0.884 0.116
#> GSM701827     2  0.0000      0.969 0.000 1.000
#> GSM701826     2  0.0000      0.969 0.000 1.000
#> GSM701797     2  0.9427      0.396 0.360 0.640
#> GSM701796     1  0.0000      0.938 1.000 0.000
#> GSM701795     2  0.0000      0.969 0.000 1.000
#> GSM701794     2  0.0000      0.969 0.000 1.000
#> GSM701831     2  0.0000      0.969 0.000 1.000
#> GSM701830     2  0.0000      0.969 0.000 1.000
#> GSM701829     2  0.1184      0.959 0.016 0.984
#> GSM701828     2  0.0376      0.967 0.004 0.996
#> GSM701798     2  0.0000      0.969 0.000 1.000
#> GSM701802     2  0.0000      0.969 0.000 1.000
#> GSM701801     1  0.1414      0.929 0.980 0.020
#> GSM701800     1  0.6148      0.823 0.848 0.152
#> GSM701799     2  0.0000      0.969 0.000 1.000
#> GSM701832     2  0.0000      0.969 0.000 1.000
#> GSM701835     2  0.3584      0.908 0.068 0.932
#> GSM701834     2  0.0000      0.969 0.000 1.000
#> GSM701833     2  0.0000      0.969 0.000 1.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-NMF-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-NMF-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between the classes; these can serve as candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
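For example, the signature rows can be filtered and mapped back to the input matrix; the 0.05 FDR cutoff here is only an illustration.

```r
# code only for demonstration
sig = tb[tb$fdr < 0.05, ]       # keep signatures passing an assumed FDR cutoff
mat_sig = mat[sig$which_row, ]  # submatrix of the input matrix for these rows
```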

The UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split when increasing k:

collect_classes(res)

plot of chunk MAD-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n  age(p) time(p) tissue(p) individual(p) k
#> MAD:NMF 66 0.01697 0.03921  1.00e+00       0.00813 2
#> MAD:NMF 60 0.00107 0.05401  9.10e-07       0.00889 3
#> MAD:NMF 46 0.00224 0.00958  1.22e-04       0.04438 4
#> MAD:NMF 27 0.12800 0.03622  3.35e-02       0.26074 5
#> MAD:NMF 19 1.00000 0.00818  3.56e-01       0.22600 6
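A sketch of the underlying tests, assuming annotation columns named as in the table header (age numeric, tissue discrete):

```r
# code only for demonstration
cl = get_classes(res, k = 2)$class
anno = get_anno(res)
oneway.test(anno$age ~ factor(cl))   # numeric annotation: one-way ANOVA
chisq.test(table(cl, anno$tissue))   # discrete annotation: chi-squared test
```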

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


ATC:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "hclust"]
# you can also extract it by
# res = res_list["ATC:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 5.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-hclust-collect-plots

All the plots in the panels can be made by individual functions, and they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk ATC-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.573           0.869       0.896         0.2698 0.658   0.658
#> 3 3 0.249           0.529       0.665         1.0185 0.699   0.543
#> 4 4 0.307           0.684       0.720         0.1634 0.856   0.676
#> 5 5 0.517           0.739       0.799         0.1374 0.897   0.736
#> 6 6 0.616           0.624       0.799         0.0688 0.938   0.783

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 5

The following shows the table of the partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method; each value in the membership matrix represents the probability that a sample belongs to the corresponding group. The final class label for a sample is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.940 1.000 0.000
#> GSM701769     1  0.0000      0.940 1.000 0.000
#> GSM701768     1  0.4690      0.849 0.900 0.100
#> GSM701767     1  0.2778      0.913 0.952 0.048
#> GSM701766     1  0.2236      0.922 0.964 0.036
#> GSM701806     1  0.0000      0.940 1.000 0.000
#> GSM701805     1  0.0000      0.940 1.000 0.000
#> GSM701804     1  0.3274      0.904 0.940 0.060
#> GSM701803     1  0.0376      0.938 0.996 0.004
#> GSM701775     1  0.0000      0.940 1.000 0.000
#> GSM701774     1  0.0672      0.937 0.992 0.008
#> GSM701773     2  0.9460      0.868 0.364 0.636
#> GSM701772     1  0.2236      0.922 0.964 0.036
#> GSM701771     1  0.0000      0.940 1.000 0.000
#> GSM701810     1  0.0000      0.940 1.000 0.000
#> GSM701809     1  0.3431      0.898 0.936 0.064
#> GSM701808     1  0.0000      0.940 1.000 0.000
#> GSM701807     1  0.0000      0.940 1.000 0.000
#> GSM701780     1  0.0000      0.940 1.000 0.000
#> GSM701779     2  0.0000      0.600 0.000 1.000
#> GSM701778     1  0.0376      0.938 0.996 0.004
#> GSM701777     1  0.2236      0.922 0.964 0.036
#> GSM701776     1  0.0000      0.940 1.000 0.000
#> GSM701816     1  0.1184      0.933 0.984 0.016
#> GSM701815     2  0.9686      0.843 0.396 0.604
#> GSM701814     2  0.9686      0.843 0.396 0.604
#> GSM701813     1  0.0000      0.940 1.000 0.000
#> GSM701812     1  0.0000      0.940 1.000 0.000
#> GSM701811     1  0.0000      0.940 1.000 0.000
#> GSM701786     1  0.0000      0.940 1.000 0.000
#> GSM701785     1  0.1633      0.930 0.976 0.024
#> GSM701784     1  0.2423      0.919 0.960 0.040
#> GSM701783     1  0.0000      0.940 1.000 0.000
#> GSM701782     1  0.0376      0.938 0.996 0.004
#> GSM701781     1  0.0376      0.938 0.996 0.004
#> GSM701822     2  0.9608      0.857 0.384 0.616
#> GSM701821     1  0.0376      0.938 0.996 0.004
#> GSM701820     1  0.3431      0.898 0.936 0.064
#> GSM701819     1  0.0000      0.940 1.000 0.000
#> GSM701818     1  0.0000      0.940 1.000 0.000
#> GSM701817     1  0.0000      0.940 1.000 0.000
#> GSM701790     1  0.4431      0.858 0.908 0.092
#> GSM701789     1  0.0000      0.940 1.000 0.000
#> GSM701788     1  0.0000      0.940 1.000 0.000
#> GSM701787     1  0.6343      0.770 0.840 0.160
#> GSM701824     1  0.4431      0.858 0.908 0.092
#> GSM701823     2  0.9393      0.866 0.356 0.644
#> GSM701791     2  0.9522      0.813 0.372 0.628
#> GSM701793     1  0.4431      0.858 0.908 0.092
#> GSM701792     1  0.5737      0.807 0.864 0.136
#> GSM701825     1  0.4431      0.858 0.908 0.092
#> GSM701827     2  0.0000      0.600 0.000 1.000
#> GSM701826     1  0.6247      0.777 0.844 0.156
#> GSM701797     1  0.0376      0.938 0.996 0.004
#> GSM701796     1  0.0000      0.940 1.000 0.000
#> GSM701795     2  0.9491      0.867 0.368 0.632
#> GSM701794     2  0.9460      0.868 0.364 0.636
#> GSM701831     1  0.0376      0.938 0.996 0.004
#> GSM701830     2  0.9170      0.846 0.332 0.668
#> GSM701829     1  0.9635     -0.268 0.612 0.388
#> GSM701828     1  0.7745      0.557 0.772 0.228
#> GSM701798     2  0.9710      0.816 0.400 0.600
#> GSM701802     1  0.1184      0.933 0.984 0.016
#> GSM701801     1  0.0000      0.940 1.000 0.000
#> GSM701800     1  0.0000      0.940 1.000 0.000
#> GSM701799     2  0.9460      0.868 0.364 0.636
#> GSM701832     1  0.7745      0.557 0.772 0.228
#> GSM701835     1  0.1414      0.932 0.980 0.020
#> GSM701834     2  0.9661      0.848 0.392 0.608
#> GSM701833     2  0.9170      0.846 0.332 0.668

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-hclust-consensus-heatmap-1
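The underlying consensus values can also be retrieved directly via get_consensus(), listed among the object's methods above; a minimal sketch:

```r
# code only for demonstration
cm = get_consensus(res, k = 2)  # 70 x 70 matrix of co-clustering probabilities
range(cm)                       # values lie in [0, 1]
```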

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-hclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between the classes; these can serve as candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split when increasing k:

collect_classes(res)

plot of chunk ATC-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n   age(p) time(p) tissue(p) individual(p) k
#> ATC:hclust 69 0.042496 0.32015     0.750       0.02006 2
#> ATC:hclust 55 0.077094 0.00026     0.848       0.06757 3
#> ATC:hclust 58 0.021506 0.08066     0.845       0.00934 4
#> ATC:hclust 67 0.000554 0.00735     0.941       0.00202 5
#> ATC:hclust 54 0.011123 0.01928     0.903       0.01357 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


ATC:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "kmeans"]
# you can also extract it by
# res = res_list["ATC:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-kmeans-collect-plots

All the plots in the panels can be made by individual functions, and they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk ATC-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.484           0.852       0.908         0.4899 0.503   0.503
#> 3 3 0.547           0.790       0.875         0.3240 0.632   0.400
#> 4 4 0.624           0.725       0.834         0.1415 0.824   0.546
#> 5 5 0.720           0.806       0.854         0.0650 0.903   0.637
#> 6 6 0.753           0.698       0.831         0.0365 0.972   0.867

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method; each value in the membership matrix represents the probability that a sample belongs to the corresponding group. The final class label for a sample is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.2043      0.901 0.968 0.032
#> GSM701769     1  0.0000      0.902 1.000 0.000
#> GSM701768     2  0.7602      0.786 0.220 0.780
#> GSM701767     2  0.6343      0.845 0.160 0.840
#> GSM701766     1  0.9358      0.565 0.648 0.352
#> GSM701806     1  0.3114      0.896 0.944 0.056
#> GSM701805     1  0.0000      0.902 1.000 0.000
#> GSM701804     2  0.6343      0.832 0.160 0.840
#> GSM701803     1  0.4161      0.875 0.916 0.084
#> GSM701775     1  0.3114      0.896 0.944 0.056
#> GSM701774     1  0.0938      0.903 0.988 0.012
#> GSM701773     2  0.3114      0.887 0.056 0.944
#> GSM701772     1  0.4815      0.867 0.896 0.104
#> GSM701771     1  0.3114      0.896 0.944 0.056
#> GSM701810     1  0.3114      0.896 0.944 0.056
#> GSM701809     2  0.5737      0.846 0.136 0.864
#> GSM701808     1  0.3114      0.896 0.944 0.056
#> GSM701807     1  0.3114      0.896 0.944 0.056
#> GSM701780     1  0.0376      0.902 0.996 0.004
#> GSM701779     2  0.3114      0.887 0.056 0.944
#> GSM701778     1  0.8499      0.673 0.724 0.276
#> GSM701777     1  0.4298      0.881 0.912 0.088
#> GSM701776     1  0.3114      0.896 0.944 0.056
#> GSM701816     1  0.4022      0.884 0.920 0.080
#> GSM701815     2  0.3114      0.887 0.056 0.944
#> GSM701814     2  0.3114      0.887 0.056 0.944
#> GSM701813     1  0.3431      0.885 0.936 0.064
#> GSM701812     1  0.2236      0.901 0.964 0.036
#> GSM701811     1  0.3114      0.896 0.944 0.056
#> GSM701786     1  0.2043      0.901 0.968 0.032
#> GSM701785     1  0.9427      0.480 0.640 0.360
#> GSM701784     2  0.4939      0.856 0.108 0.892
#> GSM701783     1  0.0000      0.902 1.000 0.000
#> GSM701782     1  0.5408      0.849 0.876 0.124
#> GSM701781     1  0.5408      0.849 0.876 0.124
#> GSM701822     2  0.3114      0.887 0.056 0.944
#> GSM701821     1  0.5408      0.849 0.876 0.124
#> GSM701820     2  0.6148      0.838 0.152 0.848
#> GSM701819     1  0.0376      0.902 0.996 0.004
#> GSM701818     1  0.0376      0.902 0.996 0.004
#> GSM701817     1  0.0376      0.902 0.996 0.004
#> GSM701790     2  0.7056      0.808 0.192 0.808
#> GSM701789     1  0.3114      0.896 0.944 0.056
#> GSM701788     1  0.3114      0.896 0.944 0.056
#> GSM701787     2  0.0376      0.885 0.004 0.996
#> GSM701824     2  0.8813      0.674 0.300 0.700
#> GSM701823     2  0.3114      0.887 0.056 0.944
#> GSM701791     2  0.0000      0.886 0.000 1.000
#> GSM701793     2  0.8763      0.681 0.296 0.704
#> GSM701792     2  0.7602      0.786 0.220 0.780
#> GSM701825     2  0.7602      0.786 0.220 0.780
#> GSM701827     2  0.0000      0.886 0.000 1.000
#> GSM701826     2  0.0376      0.885 0.004 0.996
#> GSM701797     1  0.4690      0.866 0.900 0.100
#> GSM701796     1  0.3114      0.896 0.944 0.056
#> GSM701795     2  0.3114      0.887 0.056 0.944
#> GSM701794     2  0.3114      0.887 0.056 0.944
#> GSM701831     1  0.5294      0.852 0.880 0.120
#> GSM701830     2  0.1414      0.887 0.020 0.980
#> GSM701829     1  0.7056      0.780 0.808 0.192
#> GSM701828     2  0.2603      0.883 0.044 0.956
#> GSM701798     2  0.3114      0.887 0.056 0.944
#> GSM701802     1  0.5519      0.846 0.872 0.128
#> GSM701801     1  0.0376      0.902 0.996 0.004
#> GSM701800     1  0.0376      0.902 0.996 0.004
#> GSM701799     2  0.3114      0.887 0.056 0.944
#> GSM701832     2  0.6712      0.824 0.176 0.824
#> GSM701835     1  0.9522      0.522 0.628 0.372
#> GSM701834     2  0.3114      0.887 0.056 0.944
#> GSM701833     2  0.0000      0.886 0.000 1.000

Heatmap of the consensus matrix, which visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-kmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-kmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between the classes; these can serve as candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-kmeans-dimension-reduction-1
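Other projection methods can be passed to dimension_reduction(); "PCA" is assumed here to be among the supported options alongside "UMAP".

```r
# code only for demonstration
dimension_reduction(res, k = 2, method = "PCA")
```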

The following heatmap shows how the subgroups split when increasing k:

collect_classes(res)

plot of chunk ATC-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n  age(p) time(p) tissue(p) individual(p) k
#> ATC:kmeans 69 0.00375 0.77857     0.575      0.000334 2
#> ATC:kmeans 65 0.07637 0.00842     0.934      0.081670 3
#> ATC:kmeans 61 0.02220 0.01178     0.741      0.006280 4
#> ATC:kmeans 67 0.00243 0.01153     0.935      0.017987 5
#> ATC:kmeans 61 0.00800 0.01462     0.928      0.016455 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


ATC:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "skmeans"]
# you can also extract it by
# res = res_list["ATC:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-skmeans-collect-plots

All the plots in the panels can be made by individual functions, and they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk ATC-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.691           0.928       0.959         0.5071 0.493   0.493
#> 3 3 0.867           0.879       0.948         0.3319 0.735   0.510
#> 4 4 0.899           0.885       0.949         0.1250 0.801   0.479
#> 5 5 0.800           0.721       0.863         0.0469 0.946   0.788
#> 6 6 0.743           0.652       0.797         0.0307 0.962   0.833

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions (click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method; each value in the membership matrix represents the probability that a sample belongs to the corresponding group. The final class label for a sample is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.955 1.000 0.000
#> GSM701769     1  0.0000      0.955 1.000 0.000
#> GSM701768     2  0.4815      0.897 0.104 0.896
#> GSM701767     2  0.0000      0.957 0.000 1.000
#> GSM701766     2  0.0000      0.957 0.000 1.000
#> GSM701806     1  0.0000      0.955 1.000 0.000
#> GSM701805     1  0.0000      0.955 1.000 0.000
#> GSM701804     2  0.0672      0.954 0.008 0.992
#> GSM701803     1  0.4815      0.908 0.896 0.104
#> GSM701775     1  0.0000      0.955 1.000 0.000
#> GSM701774     1  0.0000      0.955 1.000 0.000
#> GSM701773     2  0.0000      0.957 0.000 1.000
#> GSM701772     2  0.9393      0.539 0.356 0.644
#> GSM701771     1  0.0000      0.955 1.000 0.000
#> GSM701810     1  0.0000      0.955 1.000 0.000
#> GSM701809     2  0.2423      0.937 0.040 0.960
#> GSM701808     1  0.0000      0.955 1.000 0.000
#> GSM701807     1  0.0000      0.955 1.000 0.000
#> GSM701780     1  0.0000      0.955 1.000 0.000
#> GSM701779     2  0.0000      0.957 0.000 1.000
#> GSM701778     1  0.7219      0.807 0.800 0.200
#> GSM701777     1  0.5178      0.899 0.884 0.116
#> GSM701776     1  0.0000      0.955 1.000 0.000
#> GSM701816     1  0.3879      0.922 0.924 0.076
#> GSM701815     2  0.0000      0.957 0.000 1.000
#> GSM701814     2  0.0000      0.957 0.000 1.000
#> GSM701813     1  0.4562      0.912 0.904 0.096
#> GSM701812     1  0.0000      0.955 1.000 0.000
#> GSM701811     1  0.0000      0.955 1.000 0.000
#> GSM701786     1  0.0000      0.955 1.000 0.000
#> GSM701785     2  0.0000      0.957 0.000 1.000
#> GSM701784     2  0.0000      0.957 0.000 1.000
#> GSM701783     1  0.0000      0.955 1.000 0.000
#> GSM701782     1  0.4815      0.908 0.896 0.104
#> GSM701781     1  0.4815      0.908 0.896 0.104
#> GSM701822     2  0.0000      0.957 0.000 1.000
#> GSM701821     1  0.4815      0.908 0.896 0.104
#> GSM701820     2  0.4298      0.908 0.088 0.912
#> GSM701819     1  0.0000      0.955 1.000 0.000
#> GSM701818     1  0.0000      0.955 1.000 0.000
#> GSM701817     1  0.0000      0.955 1.000 0.000
#> GSM701790     2  0.4815      0.897 0.104 0.896
#> GSM701789     1  0.0000      0.955 1.000 0.000
#> GSM701788     1  0.0000      0.955 1.000 0.000
#> GSM701787     2  0.0000      0.957 0.000 1.000
#> GSM701824     2  0.7139      0.807 0.196 0.804
#> GSM701823     2  0.0000      0.957 0.000 1.000
#> GSM701791     2  0.0000      0.957 0.000 1.000
#> GSM701793     2  0.6712      0.831 0.176 0.824
#> GSM701792     2  0.4815      0.897 0.104 0.896
#> GSM701825     2  0.4815      0.897 0.104 0.896
#> GSM701827     2  0.0000      0.957 0.000 1.000
#> GSM701826     2  0.0000      0.957 0.000 1.000
#> GSM701797     1  0.4815      0.908 0.896 0.104
#> GSM701796     1  0.0000      0.955 1.000 0.000
#> GSM701795     2  0.0000      0.957 0.000 1.000
#> GSM701794     2  0.0000      0.957 0.000 1.000
#> GSM701831     1  0.4815      0.908 0.896 0.104
#> GSM701830     2  0.0000      0.957 0.000 1.000
#> GSM701829     1  0.8608      0.682 0.716 0.284
#> GSM701828     2  0.0000      0.957 0.000 1.000
#> GSM701798     2  0.0000      0.957 0.000 1.000
#> GSM701802     1  0.4939      0.905 0.892 0.108
#> GSM701801     1  0.0000      0.955 1.000 0.000
#> GSM701800     1  0.0000      0.955 1.000 0.000
#> GSM701799     2  0.0000      0.957 0.000 1.000
#> GSM701832     2  0.4815      0.897 0.104 0.896
#> GSM701835     2  0.0000      0.957 0.000 1.000
#> GSM701834     2  0.0000      0.957 0.000 1.000
#> GSM701833     2  0.0000      0.957 0.000 1.000
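The class and entropy columns above can be reproduced from a membership row with a small sketch (a Python re-implementation of the idea, not cola's R code): the class is the group with the highest membership probability, and the entropy is the Shannon entropy of the row normalized by log2(k).

```python
import math

# Illustrative re-implementation: class label = argmax of the membership
# probabilities (1-based), entropy = normalized Shannon entropy of the row.
def classify(p):
    k = len(p)
    label = p.index(max(p)) + 1  # 1-based group label
    h = -sum(x * math.log2(x) for x in p if x > 0) / math.log2(k)
    return label, h

# Row GSM701768 from the table above: p1 = 0.104, p2 = 0.896
label, entropy = classify([0.104, 0.896])
print(label, round(entropy, 4))  # 2 0.4815
```

The result matches the class and entropy reported for GSM701768 in the table.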

Heatmaps for the consensus matrix. They visualize the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-skmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-skmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-skmeans-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
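For example, to keep only the most significant signatures one can filter on the fdr column (in R, simply tb[tb$fdr < 0.01, ]); the same filtering sketched in Python with the example rows above:

```python
# Illustrative filtering of the signature table by FDR threshold
# (equivalent to tb[tb$fdr < 0.01, ] in R).
rows = [
    (38, 0.042760348), (40, 0.018707592), (55, 0.019134737),
    (59, 0.006059896), (60, 0.018055526), (98, 0.009384629),
]  # (which_row, fdr) pairs taken from the example output above
significant = [r for r, fdr in rows if fdr < 0.01]
print(significant)  # [59, 98]
```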

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-skmeans-dimension-reduction-1
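Besides "UMAP", dimension_reduction() also accepts other methods such as "PCA" (depending on your cola version). A minimal Python sketch of PCA via SVD, for illustration only (assumes numpy; this is not cola's code):

```python
import numpy as np

# Minimal PCA via SVD: center each feature, then project the samples
# onto the top right singular vectors.
def pca(X, n_components=2):
    Xc = X - X.mean(axis=0)              # center each column (feature)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T      # (n_samples, n_components)

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 5))             # 70 samples, 5 features
coords = pca(X)
print(coords.shape)  # (70, 2)
```

Each row of coords gives a sample's 2D coordinates, analogous to the points in the plot above.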

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>              n  age(p) time(p) tissue(p) individual(p) k
#> ATC:skmeans 70 0.00858 0.57771     0.821      0.000987 2
#> ATC:skmeans 66 0.05599 0.00329     0.821      0.057836 3
#> ATC:skmeans 66 0.03836 0.00909     0.906      0.008001 4
#> ATC:skmeans 58 0.01640 0.01457     0.813      0.005752 5
#> ATC:skmeans 55 0.00856 0.01969     0.881      0.011239 6
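The chi-squared statistic behind the discrete-annotation tests can be sketched in a few lines of Python (illustrative only; not cola's implementation, which also computes p-values):

```python
# Illustrative chi-squared statistic for a contingency table of
# subgroup label vs. a discrete annotation: sum of (O - E)^2 / E,
# with E the expected count under independence.
def chi2_stat(table):
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    return sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i in range(len(table)) for j in range(len(table[0]))
    )

print(round(chi2_stat([[10, 20], [20, 10]]), 3))  # 6.667
```

A large statistic (relative to the chi-squared distribution with the appropriate degrees of freedom) yields a small p-value, as in the tissue(p) and individual(p) columns above.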

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


ATC:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "pam"]
# you can also extract it by
# res = res_list["ATC:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-pam-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups; if the two are too similar, k is not accepted as better than k-1.
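The Rand and Jaccard indices are pair-counting measures; the following Python sketch compares two partitions this way (illustrative, not cola's code):

```python
from itertools import combinations

# Count sample pairs by whether they co-cluster in each partition:
# s11 = same group in both, s10/s01 = same in only one, s00 = in neither.
def pair_counts(a, b):
    s11 = s10 = s01 = s00 = 0
    for i, j in combinations(range(len(a)), 2):
        same_a, same_b = a[i] == a[j], b[i] == b[j]
        if same_a and same_b: s11 += 1
        elif same_a: s10 += 1
        elif same_b: s01 += 1
        else: s00 += 1
    return s11, s10, s01, s00

a = [1, 1, 2, 2, 3, 3]   # e.g. the partition for k = 3
b = [1, 1, 2, 2, 2, 2]   # e.g. the partition for k - 1 = 2
s11, s10, s01, s00 = pair_counts(a, b)
rand = (s11 + s00) / (s11 + s10 + s01 + s00)
jaccard = s11 / (s11 + s10 + s01)
print(round(rand, 3), round(jaccard, 3))  # 0.733 0.429
```

Values near 1 in the get_stats() table mean the partition for k barely differs from that for k-1.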

select_partition_number(res)

plot of chunk ATC-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.627           0.889       0.922         0.4815 0.496   0.496
#> 3 3 1.000           0.970       0.987         0.3137 0.755   0.558
#> 4 4 0.743           0.840       0.907         0.1647 0.896   0.717
#> 5 5 0.802           0.834       0.905         0.0805 0.823   0.445
#> 6 6 0.763           0.632       0.783         0.0356 0.977   0.889

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 3

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.937 1.000 0.000
#> GSM701769     1  0.0000      0.937 1.000 0.000
#> GSM701768     2  0.8144      0.810 0.252 0.748
#> GSM701767     2  0.6887      0.836 0.184 0.816
#> GSM701766     1  0.8144      0.695 0.748 0.252
#> GSM701806     1  0.0000      0.937 1.000 0.000
#> GSM701805     1  0.0000      0.937 1.000 0.000
#> GSM701804     2  0.6887      0.836 0.184 0.816
#> GSM701803     1  0.3733      0.937 0.928 0.072
#> GSM701775     1  0.0000      0.937 1.000 0.000
#> GSM701774     1  0.3733      0.937 0.928 0.072
#> GSM701773     2  0.0000      0.888 0.000 1.000
#> GSM701772     1  0.4562      0.923 0.904 0.096
#> GSM701771     1  0.0000      0.937 1.000 0.000
#> GSM701810     1  0.0000      0.937 1.000 0.000
#> GSM701809     2  0.6801      0.839 0.180 0.820
#> GSM701808     1  0.5294      0.828 0.880 0.120
#> GSM701807     1  0.0000      0.937 1.000 0.000
#> GSM701780     1  0.3733      0.937 0.928 0.072
#> GSM701779     2  0.0000      0.888 0.000 1.000
#> GSM701778     1  0.7674      0.775 0.776 0.224
#> GSM701777     1  0.4690      0.920 0.900 0.100
#> GSM701776     1  0.0000      0.937 1.000 0.000
#> GSM701816     2  0.8555      0.720 0.280 0.720
#> GSM701815     2  0.0000      0.888 0.000 1.000
#> GSM701814     2  0.0000      0.888 0.000 1.000
#> GSM701813     1  0.3733      0.937 0.928 0.072
#> GSM701812     1  0.0000      0.937 1.000 0.000
#> GSM701811     1  0.0000      0.937 1.000 0.000
#> GSM701786     1  0.0000      0.937 1.000 0.000
#> GSM701785     1  0.4562      0.923 0.904 0.096
#> GSM701784     2  0.9209      0.607 0.336 0.664
#> GSM701783     1  0.0000      0.937 1.000 0.000
#> GSM701782     1  0.4431      0.930 0.908 0.092
#> GSM701781     1  0.4431      0.930 0.908 0.092
#> GSM701822     2  0.0000      0.888 0.000 1.000
#> GSM701821     1  0.4431      0.930 0.908 0.092
#> GSM701820     2  0.6148      0.854 0.152 0.848
#> GSM701819     1  0.2778      0.940 0.952 0.048
#> GSM701818     1  0.2423      0.940 0.960 0.040
#> GSM701817     1  0.3733      0.937 0.928 0.072
#> GSM701790     2  0.7674      0.828 0.224 0.776
#> GSM701789     1  0.0000      0.937 1.000 0.000
#> GSM701788     1  0.0000      0.937 1.000 0.000
#> GSM701787     2  0.1414      0.887 0.020 0.980
#> GSM701824     2  0.8207      0.807 0.256 0.744
#> GSM701823     2  0.0000      0.888 0.000 1.000
#> GSM701791     2  0.0938      0.888 0.012 0.988
#> GSM701793     2  0.8207      0.807 0.256 0.744
#> GSM701792     2  0.8207      0.807 0.256 0.744
#> GSM701825     2  0.7674      0.828 0.224 0.776
#> GSM701827     2  0.0000      0.888 0.000 1.000
#> GSM701826     2  0.1414      0.887 0.020 0.980
#> GSM701797     1  0.4431      0.930 0.908 0.092
#> GSM701796     1  0.0000      0.937 1.000 0.000
#> GSM701795     2  0.0000      0.888 0.000 1.000
#> GSM701794     2  0.0000      0.888 0.000 1.000
#> GSM701831     1  0.4431      0.930 0.908 0.092
#> GSM701830     2  0.0000      0.888 0.000 1.000
#> GSM701829     1  0.4431      0.930 0.908 0.092
#> GSM701828     2  0.5946      0.857 0.144 0.856
#> GSM701798     2  0.0376      0.887 0.004 0.996
#> GSM701802     1  0.3733      0.937 0.928 0.072
#> GSM701801     1  0.3584      0.938 0.932 0.068
#> GSM701800     1  0.3733      0.937 0.928 0.072
#> GSM701799     2  0.0000      0.888 0.000 1.000
#> GSM701832     2  0.6973      0.836 0.188 0.812
#> GSM701835     2  0.7056      0.830 0.192 0.808
#> GSM701834     2  0.0000      0.888 0.000 1.000
#> GSM701833     2  0.0000      0.888 0.000 1.000

Heatmaps for the consensus matrix. They visualize the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-pam-consensus-heatmap-1
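The consensus matrix itself can be sketched as the co-clustering frequency over the resampled partitions (an illustrative Python re-implementation, not cola's code):

```python
# Illustrative consensus matrix: entry (i, j) is the fraction of
# partitions in which samples i and j fall into the same group.
def consensus_matrix(partitions):
    n = len(partitions[0])
    m = [[0.0] * n for _ in range(n)]
    for part in partitions:
        for i in range(n):
            for j in range(n):
                if part[i] == part[j]:
                    m[i][j] += 1.0
    for i in range(n):
        for j in range(n):
            m[i][j] /= len(partitions)
    return m

parts = [[1, 1, 2, 2], [1, 1, 2, 2], [1, 2, 2, 2]]
m = consensus_matrix(parts)
print(m[0][1], m[0][2])  # samples 0 and 1 co-cluster in 2 of 3 runs
```

A stable partition produces a consensus matrix of values close to 0 or 1, which is what the heatmap above visualizes.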

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-pam-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-pam-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-pam-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n  age(p) time(p) tissue(p) individual(p) k
#> ATC:pam 70 0.00236 0.91333     0.246      0.000845 2
#> ATC:pam 70 0.02785 0.02561     0.957      0.011858 3
#> ATC:pam 67 0.04068 0.00143     0.992      0.045176 4
#> ATC:pam 67 0.00262 0.05116     0.681      0.019304 5
#> ATC:pam 57 0.00142 0.36037     0.307      0.005336 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


ATC:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "mclust"]
# you can also extract it by
# res = res_list["ATC:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-mclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups; if the two are too similar, k is not accepted as better than k-1.
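The silhouette score here is computed from the consensus matrix; the following Python sketch is a simplified version for k = 2 that treats consensus values as similarities (the exact formula in cola may differ):

```python
# Simplified silhouette from a consensus (similarity) matrix, k = 2:
# a = mean similarity to the sample's own group (excluding itself),
# b = mean similarity to the other group, s = (a - b) / max(a, b).
# This is an illustration, not cola's exact implementation.
def silhouette(consensus, classes, i):
    own = [consensus[i][j] for j in range(len(classes))
           if j != i and classes[j] == classes[i]]
    other = [consensus[i][j] for j in range(len(classes))
             if classes[j] != classes[i]]
    a = sum(own) / len(own)
    b = sum(other) / len(other)
    return (a - b) / max(a, b)

consensus = [[1.0, 0.9, 0.1],
             [0.9, 1.0, 0.2],
             [0.1, 0.2, 1.0]]
classes = [1, 1, 2]
print(round(silhouette(consensus, classes, 0), 3))  # 0.889
```

A score near 1 means a sample is much more similar to its own group than to the other group, matching the high mean_silhouette values for stable k in the tables.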

select_partition_number(res)

plot of chunk ATC-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.963           0.940       0.968          0.294 0.731   0.731
#> 3 3 0.397           0.536       0.792          0.560 0.930   0.905
#> 4 4 0.262           0.571       0.734          0.281 0.636   0.484
#> 5 5 0.428           0.716       0.796          0.151 0.855   0.659
#> 6 6 0.432           0.343       0.577          0.105 0.876   0.630

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     2  0.3274      0.940 0.060 0.940
#> GSM701769     2  0.0376      0.967 0.004 0.996
#> GSM701768     1  0.7299      0.728 0.796 0.204
#> GSM701767     1  0.1184      0.971 0.984 0.016
#> GSM701766     2  0.0672      0.967 0.008 0.992
#> GSM701806     2  0.1184      0.965 0.016 0.984
#> GSM701805     1  0.1414      0.969 0.980 0.020
#> GSM701804     1  0.0376      0.973 0.996 0.004
#> GSM701803     1  0.0938      0.972 0.988 0.012
#> GSM701775     2  0.0672      0.967 0.008 0.992
#> GSM701774     2  0.0376      0.967 0.004 0.996
#> GSM701773     2  0.1414      0.963 0.020 0.980
#> GSM701772     2  0.0672      0.967 0.008 0.992
#> GSM701771     1  0.0376      0.973 0.996 0.004
#> GSM701810     2  0.0672      0.967 0.008 0.992
#> GSM701809     2  0.2948      0.947 0.052 0.948
#> GSM701808     2  0.0672      0.967 0.008 0.992
#> GSM701807     1  0.0376      0.973 0.996 0.004
#> GSM701780     2  0.0000      0.967 0.000 1.000
#> GSM701779     1  0.0672      0.971 0.992 0.008
#> GSM701778     2  0.1184      0.964 0.016 0.984
#> GSM701777     2  0.0000      0.967 0.000 1.000
#> GSM701776     1  0.0376      0.973 0.996 0.004
#> GSM701816     2  0.0000      0.967 0.000 1.000
#> GSM701815     2  0.7219      0.777 0.200 0.800
#> GSM701814     2  0.0376      0.966 0.004 0.996
#> GSM701813     2  0.2423      0.950 0.040 0.960
#> GSM701812     2  0.0672      0.967 0.008 0.992
#> GSM701811     2  0.7528      0.760 0.216 0.784
#> GSM701786     2  0.0672      0.967 0.008 0.992
#> GSM701785     2  0.0000      0.967 0.000 1.000
#> GSM701784     2  0.0672      0.967 0.008 0.992
#> GSM701783     2  0.1184      0.965 0.016 0.984
#> GSM701782     2  0.0000      0.967 0.000 1.000
#> GSM701781     2  0.9970      0.151 0.468 0.532
#> GSM701822     2  0.0376      0.966 0.004 0.996
#> GSM701821     2  0.0000      0.967 0.000 1.000
#> GSM701820     2  0.2236      0.956 0.036 0.964
#> GSM701819     2  0.0000      0.967 0.000 1.000
#> GSM701818     2  0.7139      0.778 0.196 0.804
#> GSM701817     2  0.2423      0.950 0.040 0.960
#> GSM701790     2  0.2423      0.954 0.040 0.960
#> GSM701789     2  0.0672      0.967 0.008 0.992
#> GSM701788     2  0.0672      0.967 0.008 0.992
#> GSM701787     2  0.4690      0.906 0.100 0.900
#> GSM701824     2  0.2423      0.954 0.040 0.960
#> GSM701823     1  0.0672      0.971 0.992 0.008
#> GSM701791     2  0.2778      0.946 0.048 0.952
#> GSM701793     2  0.2423      0.954 0.040 0.960
#> GSM701792     2  0.0672      0.967 0.008 0.992
#> GSM701825     1  0.0376      0.973 0.996 0.004
#> GSM701827     2  0.4298      0.912 0.088 0.912
#> GSM701826     2  0.0938      0.966 0.012 0.988
#> GSM701797     2  0.0000      0.967 0.000 1.000
#> GSM701796     2  0.0672      0.967 0.008 0.992
#> GSM701795     2  0.0938      0.966 0.012 0.988
#> GSM701794     2  0.0376      0.966 0.004 0.996
#> GSM701831     2  0.0000      0.967 0.000 1.000
#> GSM701830     2  0.0376      0.966 0.004 0.996
#> GSM701829     2  0.0000      0.967 0.000 1.000
#> GSM701828     2  0.0672      0.967 0.008 0.992
#> GSM701798     2  0.2778      0.946 0.048 0.952
#> GSM701802     2  0.0000      0.967 0.000 1.000
#> GSM701801     2  0.0000      0.967 0.000 1.000
#> GSM701800     2  0.0000      0.967 0.000 1.000
#> GSM701799     2  0.0376      0.966 0.004 0.996
#> GSM701832     2  0.0672      0.967 0.008 0.992
#> GSM701835     2  0.0000      0.967 0.000 1.000
#> GSM701834     2  0.0376      0.966 0.004 0.996
#> GSM701833     2  0.0376      0.966 0.004 0.996

Heatmaps for the consensus matrix. They visualize the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-mclust-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-mclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes and can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-mclust-signature_compare

get_signatures() returns a data frame invisibly, so to get the list of signatures the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-mclust-dimension-reduction-1

The following heatmap shows how subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n   age(p) time(p) tissue(p) individual(p) k
#> ATC:mclust 69 1.57e-01  0.0124     0.875        0.0438 2
#> ATC:mclust 45 2.03e-01  0.0564     0.707        0.1460 3
#> ATC:mclust 56 2.08e-04  0.0786     0.637        0.0114 4
#> ATC:mclust 63 7.49e-05  0.0517     0.775        0.0124 5
#> ATC:mclust 17 2.20e-03  0.0204     0.755        0.0860 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


ATC:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "NMF"]
# you can also extract it by
# res = res_list["ATC:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 51941 rows and 70 columns.
#>   Top rows (1000, 2000, 3000, 4000, 5000) are extracted by 'ATC' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk ATC-NMF-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1 groups; if the two are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk ATC-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.778           0.912       0.959         0.4795 0.519   0.519
#> 3 3 0.672           0.810       0.906         0.3689 0.713   0.497
#> 4 4 0.439           0.326       0.593         0.1126 0.798   0.494
#> 5 5 0.442           0.369       0.646         0.0745 0.829   0.468
#> 6 6 0.508           0.346       0.585         0.0481 0.872   0.491

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() output, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> GSM701770     1  0.0000      0.955 1.000 0.000
#> GSM701769     1  0.0000      0.955 1.000 0.000
#> GSM701768     2  0.4939      0.870 0.108 0.892
#> GSM701767     1  0.7528      0.745 0.784 0.216
#> GSM701766     1  0.6531      0.808 0.832 0.168
#> GSM701806     1  0.0000      0.955 1.000 0.000
#> GSM701805     1  0.0000      0.955 1.000 0.000
#> GSM701804     1  0.8661      0.624 0.712 0.288
#> GSM701803     1  0.0000      0.955 1.000 0.000
#> GSM701775     1  0.0000      0.955 1.000 0.000
#> GSM701774     1  0.0000      0.955 1.000 0.000
#> GSM701773     2  0.0000      0.955 0.000 1.000
#> GSM701772     1  0.2236      0.932 0.964 0.036
#> GSM701771     1  0.0000      0.955 1.000 0.000
#> GSM701810     1  0.0000      0.955 1.000 0.000
#> GSM701809     2  0.8443      0.636 0.272 0.728
#> GSM701808     1  0.0000      0.955 1.000 0.000
#> GSM701807     1  0.0000      0.955 1.000 0.000
#> GSM701780     1  0.0000      0.955 1.000 0.000
#> GSM701779     2  0.0000      0.955 0.000 1.000
#> GSM701778     1  0.6712      0.798 0.824 0.176
#> GSM701777     1  0.2603      0.926 0.956 0.044
#> GSM701776     1  0.0000      0.955 1.000 0.000
#> GSM701816     1  0.1414      0.943 0.980 0.020
#> GSM701815     2  0.1184      0.947 0.016 0.984
#> GSM701814     2  0.0672      0.952 0.008 0.992
#> GSM701813     1  0.0000      0.955 1.000 0.000
#> GSM701812     1  0.0000      0.955 1.000 0.000
#> GSM701811     1  0.0000      0.955 1.000 0.000
#> GSM701786     1  0.0000      0.955 1.000 0.000
#> GSM701785     1  0.7674      0.733 0.776 0.224
#> GSM701784     2  0.9170      0.509 0.332 0.668
#> GSM701783     1  0.0000      0.955 1.000 0.000
#> GSM701782     1  0.0000      0.955 1.000 0.000
#> GSM701781     1  0.0000      0.955 1.000 0.000
#> GSM701822     2  0.0000      0.955 0.000 1.000
#> GSM701821     1  0.0000      0.955 1.000 0.000
#> GSM701820     2  0.1843      0.939 0.028 0.972
#> GSM701819     1  0.0000      0.955 1.000 0.000
#> GSM701818     1  0.0000      0.955 1.000 0.000
#> GSM701817     1  0.0000      0.955 1.000 0.000
#> GSM701790     2  0.0000      0.955 0.000 1.000
#> GSM701789     1  0.0000      0.955 1.000 0.000
#> GSM701788     1  0.0000      0.955 1.000 0.000
#> GSM701787     2  0.0000      0.955 0.000 1.000
#> GSM701824     1  0.9460      0.421 0.636 0.364
#> GSM701823     2  0.0000      0.955 0.000 1.000
#> GSM701791     2  0.0000      0.955 0.000 1.000
#> GSM701793     2  0.4939      0.870 0.108 0.892
#> GSM701792     2  0.1633      0.942 0.024 0.976
#> GSM701825     2  0.0376      0.953 0.004 0.996
#> GSM701827     2  0.0000      0.955 0.000 1.000
#> GSM701826     2  0.0000      0.955 0.000 1.000
#> GSM701797     1  0.0000      0.955 1.000 0.000
#> GSM701796     1  0.0000      0.955 1.000 0.000
#> GSM701795     2  0.0000      0.955 0.000 1.000
#> GSM701794     2  0.0000      0.955 0.000 1.000
#> GSM701831     1  0.0000      0.955 1.000 0.000
#> GSM701830     2  0.0000      0.955 0.000 1.000
#> GSM701829     1  0.4298      0.890 0.912 0.088
#> GSM701828     2  0.0000      0.955 0.000 1.000
#> GSM701798     2  0.0000      0.955 0.000 1.000
#> GSM701802     1  0.0938      0.948 0.988 0.012
#> GSM701801     1  0.0000      0.955 1.000 0.000
#> GSM701800     1  0.0000      0.955 1.000 0.000
#> GSM701799     2  0.0000      0.955 0.000 1.000
#> GSM701832     2  0.7056      0.767 0.192 0.808
#> GSM701835     1  0.5946      0.835 0.856 0.144
#> GSM701834     2  0.0000      0.955 0.000 1.000
#> GSM701833     2  0.0000      0.955 0.000 1.000
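Samples with a low silhouette score sit ambiguously between groups. A sketch for extracting them from the table above (the 0.5 cutoff is an arbitrary illustration):

```r
# code only for demonstration; the 0.5 cutoff is arbitrary
cl = get_classes(res, k = 2)
rownames(cl)[cl$silhouette < 0.5]  # e.g. GSM701824 (silhouette 0.421) in this report
```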

Heatmaps for the consensus matrix. They visualize the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-NMF-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions, to see how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-NMF-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures which are significantly different between classes and which can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
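Since tb is an ordinary data frame, the signatures can be subset further; a sketch (the 0.01 FDR cutoff is an arbitrary illustration):

```r
# code only for demonstration; the 0.01 FDR cutoff is arbitrary
sig = tb[tb$fdr < 0.01, ]
mat_sig = mat[sig$which_row, ]  # the corresponding rows of the input matrix
```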

The UMAP plot shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-NMF-dimension-reduction-1
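Other projection methods can be selected through the method argument of dimension_reduction() (e.g. "PCA"); code only for demonstration:

```r
# code only for demonstration
dimension_reduction(res, k = 2, method = "PCA")
```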

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n  age(p)  time(p) tissue(p) individual(p) k
#> ATC:NMF 69 0.00101 0.577211   0.62846      0.000073 2
#> ATC:NMF 65 0.02864 0.000449   0.80304      0.023585 3
#> ATC:NMF 25 0.80468 0.451307   0.49106      0.123489 4
#> ATC:NMF 26 0.18704 0.018977   0.01857      0.357351 5
#> ATC:NMF 13 0.14618 0.082550   0.00486      0.142297 6
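By default the annotations attached via the anno argument are tested; a data frame of other known factors can also be supplied through the known argument (a sketch; df is a hypothetical data frame of per-sample factors):

```r
# code only for demonstration; `df` is a hypothetical data frame of known factors
test_to_known_factors(res, known = df)
```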

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.
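A minimal sketch of such a call, assuming the row names of the matrix can be mapped to genes; the ontology argument shown here follows the function's documentation, but check ?functional_enrichment for the full set of parameters:

```r
# code only for demonstration; assumes rows can be mapped to genes
lt = functional_enrichment(res, ontology = "BP")  # Gene Ontology biological process
```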

Session info

sessionInfo()
#> R version 3.6.0 (2019-04-26)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: CentOS Linux 7 (Core)
#> 
#> Matrix products: default
#> BLAS:   /usr/lib64/libblas.so.3.4.2
#> LAPACK: /usr/lib64/liblapack.so.3.4.2
#> 
#> locale:
#>  [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C               LC_TIME=en_GB.UTF-8       
#>  [4] LC_COLLATE=en_GB.UTF-8     LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8   
#>  [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
#> [10] LC_TELEPHONE=C             LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C       
#> 
#> attached base packages:
#> [1] grid      stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] genefilter_1.66.0    ComplexHeatmap_2.3.1 markdown_1.1         knitr_1.26          
#> [5] GetoptLong_0.1.7     cola_1.3.2          
#> 
#> loaded via a namespace (and not attached):
#>  [1] circlize_0.4.8       shape_1.4.4          xfun_0.11            slam_0.1-46         
#>  [5] lattice_0.20-38      splines_3.6.0        colorspace_1.4-1     vctrs_0.2.0         
#>  [9] stats4_3.6.0         blob_1.2.0           XML_3.98-1.20        survival_2.44-1.1   
#> [13] rlang_0.4.2          pillar_1.4.2         DBI_1.0.0            BiocGenerics_0.30.0 
#> [17] bit64_0.9-7          RColorBrewer_1.1-2   matrixStats_0.55.0   stringr_1.4.0       
#> [21] GlobalOptions_0.1.1  evaluate_0.14        memoise_1.1.0        Biobase_2.44.0      
#> [25] IRanges_2.18.3       parallel_3.6.0       AnnotationDbi_1.46.1 highr_0.8           
#> [29] Rcpp_1.0.3           xtable_1.8-4         backports_1.1.5      S4Vectors_0.22.1    
#> [33] annotate_1.62.0      skmeans_0.2-11       bit_1.1-14           microbenchmark_1.4-7
#> [37] brew_1.0-6           impute_1.58.0        rjson_0.2.20         png_0.1-7           
#> [41] digest_0.6.23        stringi_1.4.3        polyclip_1.10-0      clue_0.3-57         
#> [45] tools_3.6.0          bitops_1.0-6         magrittr_1.5         eulerr_6.0.0        
#> [49] RCurl_1.95-4.12      RSQLite_2.1.4        tibble_2.1.3         cluster_2.1.0       
#> [53] crayon_1.3.4         pkgconfig_2.0.3      zeallot_0.1.0        Matrix_1.2-17       
#> [57] xml2_1.2.2           httr_1.4.1           R6_2.4.1             mclust_5.4.5        
#> [61] compiler_3.6.0