diff --git a/README.md b/README.md index 9cadbc2..6caadc9 100644 --- a/README.md +++ b/README.md @@ -1,2 +1,61 @@ -# SSB_HighSpeed_Modem -8PSK/QPSK Modem to send Images/Data via a 2,7kHz SSB channel in high speed +# QO-100-modem +The purpose of this project is to transfer data (pictures, files ...) via a 2.7 kHz SSB channel on the narrow band transponder as fast as possible. + +# This is work in progress +Version 0.1 is working on my Linux PC and Odroid SBC. + +# Prerequisites +* Linux Desktop PC ... working +* Raspberry Pi 4 ... working +* Raspberry Pi 3B+ ... working, but not 100% error free in full-duplex mode (RX only or TX only works) +* Odroid N2 ... working +* Odroid C2 ... working +* Odroid C4 ... working + +* GNU Radio version 3.8.x + +* Raspberry Pi: Raspbian OS is NOT working, Ubuntu 64 bit is required instead + +* Application software "oscardata.exe" running on Windows or Linux (possibly macOS, not tested) + + +# Building the software +1. go into the folder "modem" +2. run "make" + + +# Starting the modem and application +1. go into the folder "modem" +2. run the software: ./qo100modem +command line parameters: +no parameter ... normal usage +-m IP ... specify the IPv4 address of the device where the application software is running. This is useful if you have more than one qo100modem running simultaneously. Without this parameter the app will find the modem automatically. +-e 1 ... do NOT start the GNU Radio flowgraphs automatically. This is useful if you want to work on the GR flowgraphs and start them manually (see the UDP example at the end of this README). + +3. start the user application on any PC in your home network. It will find the modem automatically. +The file is located in QO-100-modem/oscardata/oscardata/bin/Release +On Windows just start oscardata.exe +On Linux start it with: mono oscardata.exe + +# Tested scenarios + +* QO-100 via IC-9700, IC-7300 or IC-7100 ... working +* Short wave / 6m band via IC-7300, IC-7100 ... working. In case of significant noise, use the lowest bit rate (3000 bit/s) + +# Usage + +On the IC-9700 activate the DATA mode and set the RX filter FIL1 to the full range of 3.6 kHz. + +In oscardata.exe go to the "BER" tab and click START. If you change the bit rate, wait a few seconds before starting again. + +The program is now sending test data frames to the default sound card. If your sound card is properly connected to the transceiver, switch the transceiver to TX and the data will be sent to QO-100. +Receive your own transmission and feed it into the default sound card. As soon as oscardata.exe detects a correct data frame it will display status messages on the screen. + +(For testing purposes you can simply connect Line-Out of your sound card to Line-In with a cable.) + +To assign the sound card to the modem I recommend using pavucontrol. With the TX volume, set a signal level of about 20 to 24 dB above the noise floor; this is roughly -10 dB compared to the BPSK400 beacon. The received audio volume can be adjusted with the help of the spectrum display in oscardata.exe. + +Once the transmission works, you can go to the "Image RX/TX" tab. First select a picture quality, then load a picture and finally press SEND to send it to QO-100. When you correctly receive your own transmission, the RX picture will be displayed line by line. 
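+
+# UDP interface to GNU Radio (example)
+The modem and the GNU Radio flowgraphs exchange data over local UDP ports (see the .grc files): port 40134 carries the TX payload bytes into GNU Radio, port 40135 delivers the demodulated RX bytes back to the modem, and ports 40136/40137 carry constellation samples for the GUI. The snippet below is only a minimal sketch for experimenting when the flowgraphs are started manually (-e 1); it is not part of the modem. The port number and packet size are taken from the flowgraphs, everything else is plain POSIX sockets used for illustration.
+
+```c
+/* Minimal sketch (not part of the modem): print the size of each
+   demodulated data packet that the RX flowgraph sends to UDP port
+   40135 on the local machine. */
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+
+int main(void)
+{
+    int sock = socket(AF_INET, SOCK_DGRAM, 0);
+    if (sock < 0) { perror("socket"); return 1; }
+
+    struct sockaddr_in addr;
+    memset(&addr, 0, sizeof(addr));
+    addr.sin_family = AF_INET;
+    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* flowgraph sends to 127.0.0.1 */
+    addr.sin_port = htons(40135);                  /* RX data port used in the .grc files */
+    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
+
+    uint8_t buf[344];   /* psize of the UDP sink block in the RX flowgraphs */
+    for (;;)
+    {
+        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
+        if (n > 0)
+            printf("received %zd demodulated bytes\n", n);
+    }
+    return 0;
+}
+```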
+ +vy 73, DJ0ABR + diff --git a/grc/8psk_rx.grc b/grc/8psk_rx.grc new file mode 100644 index 0000000..2b0a055 --- /dev/null +++ b/grc/8psk_rx.grc @@ -0,0 +1,919 @@ +options: + parameters: + author: kurt + category: '[GRC Hier Blocks]' + cmake_opt: '' + comment: 'requires GNU Radio 3.8xxx + + does NOT work with 3.7x' + copyright: '' + description: requires GNU Radio 3.8xxx + gen_cmake: 'Off' + gen_linking: dynamic + generate_options: no_gui + hier_block_src_path: '.:' + id: rx_8psk + max_nouts: '0' + output_language: python + placement: (0,0) + qt_qss_theme: '' + realtime_scheduling: '' + run: 'True' + run_command: '{python} -u {filename}' + run_options: run + sizing_mode: fixed + thread_safe_setters: '' + title: 8PSK Modem DJ0ABR + window_size: '' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [16, 12.0] + rotation: 0 + state: enabled + +blocks: +- name: mixf + id: variable + parameters: + comment: 'mid frequency + + in the audio + + spectrum. Set to get + + lowest and highest + + frequency within the + + transceiver filter range.' + value: '1500' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [336, 12.0] + rotation: 0 + state: enabled +- name: nfilts + id: variable + parameters: + comment: '' + value: '32' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1112, 12.0] + rotation: 0 + state: enabled +- name: outputsps + id: variable + parameters: + comment: 'Samples/Symbol + + fixed value, + + do not change. + + Used to adjust + + bitrate vs. bandwidth' + value: '7' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [40, 180.0] + rotation: 0 + state: enabled +- name: rrc_taps + id: variable + parameters: + comment: '' + value: firdes.root_raised_cosine(nfilts, nfilts, 1.1/float(sps), 0.2, 11*sps*nfilts) + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1184, 12.0] + rotation: 0 + state: enabled +- name: sps + id: variable + parameters: + comment: 'Samples/Symbol + + fixed value, + + do not change' + value: '4' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [488, 12.0] + rotation: 0 + state: enabled +- name: analog_agc2_xx_0_0 + id: analog_agc2_xx + parameters: + affinity: '' + alias: '' + attack_rate: 1e-2 + comment: Costas loop needs AGC (loop gain depends on input level) + decay_rate: '0.2' + gain: '2' + max_gain: '3' + maxoutbuf: '0' + minoutbuf: '0' + reference: '1' + type: complex + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [400, 620.0] + rotation: 0 + state: enabled +- name: analog_const_source_x_0 + id: analog_const_source_x + parameters: + affinity: '' + alias: '' + comment: 'Marker to find the start + + of the values' + const: '1000' + maxoutbuf: '0' + minoutbuf: '0' + type: int + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [472, 916.0] + rotation: 180 + state: true +- name: analog_const_source_x_0_0 + id: analog_const_source_x + parameters: + affinity: '' + alias: '' + comment: 'Marker to find the start + + of the values' + const: '1000' + maxoutbuf: '0' + minoutbuf: '0' + type: int + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [536, 1076.0] + rotation: 180 + state: true +- name: analog_const_source_x_0_1 + id: analog_const_source_x + parameters: + affinity: '' + alias: '' + comment: '' + const: '16777216' + maxoutbuf: '0' + minoutbuf: '0' + type: float + 
states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [792, 1044.0] + rotation: 180 + state: true +- name: analog_sig_source_x_0_0_0 + id: analog_sig_source_x + parameters: + affinity: '' + alias: '' + amp: '1' + comment: 'the modulator output is in the baseband at 0 Hz. + + Mix it with the required audio mid frequency. + + cos and -sin are used to combine I and Q + + into the frinal signal. + + Use it als for RX in the reverse direction' + freq: mixf + maxoutbuf: '0' + minoutbuf: '0' + offset: '0' + phase: '0' + samp_rate: samp_rate + type: complex + waveform: analog.GR_COS_WAVE + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [592, 76.0] + rotation: 0 + state: enabled +- name: audio_source_0 + id: audio_source + parameters: + affinity: '' + alias: '' + comment: get audio from transceiver + device_name: '' + maxoutbuf: '0' + minoutbuf: '0' + num_outputs: '1' + ok_to_block: 'True' + samp_rate: samp_rate + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1384, 364.0] + rotation: 180 + state: true +- name: blocks_complex_to_float_0 + id: blocks_complex_to_float + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [824, 1160.0] + rotation: 180 + state: true +- name: blocks_complex_to_float_1 + id: blocks_complex_to_float + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [816, 104.0] + rotation: 0 + state: enabled +- name: blocks_float_to_complex_0 + id: blocks_float_to_complex + parameters: + affinity: '' + alias: '' + comment: 'combile I and Q + + to complex signal' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [552, 336.0] + rotation: 180 + state: true +- name: blocks_float_to_int_0 + id: blocks_float_to_int + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + scale: '16777216' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [632, 1156.0] + rotation: 180 + state: true +- name: blocks_float_to_int_0_0 + id: blocks_float_to_int + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + scale: '16777216' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [632, 1204.0] + rotation: 180 + state: true +- name: blocks_float_to_int_0_1 + id: blocks_float_to_int + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + scale: '1' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [520, 1012.0] + rotation: 180 + state: true +- name: blocks_interleave_0 + id: blocks_interleave + parameters: + affinity: '' + alias: '' + blocksize: '1' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + num_streams: '2' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [360, 1000.0] + rotation: 180 + state: true +- name: blocks_interleave_0_0 + id: blocks_interleave + parameters: + affinity: '' + alias: '' + blocksize: '1' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + num_streams: '3' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + 
coordinate: [328, 1128.0] + rotation: 180 + state: true +- name: blocks_multiply_xx_0_0_0 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: make I + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [744, 304.0] + rotation: 180 + state: enabled +- name: blocks_multiply_xx_0_1 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: make Q + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [808, 400.0] + rotation: 180 + state: enabled +- name: blocks_multiply_xx_0_1_0 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [648, 984.0] + rotation: 180 + state: enabled +- name: blocks_udp_sink_0 + id: blocks_udp_sink + parameters: + affinity: '' + alias: '' + comment: 'send RX data to UDP + + port 1235 on the local machine' + eof: 'False' + ipaddr: 127.0.0.1 + port: '40135' + psize: '344' + type: byte + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1272, 516.0] + rotation: 0 + state: true +- name: blocks_udp_sink_0_0 + id: blocks_udp_sink + parameters: + affinity: '' + alias: '' + comment: 'send QPSK Constellation data to UDP + + port 1236 on the local machine' + eof: 'False' + ipaddr: 127.0.0.1 + port: '40136' + psize: '120' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [72, 1004.0] + rotation: 180 + state: enabled +- name: blocks_udp_sink_0_0_0 + id: blocks_udp_sink + parameters: + affinity: '' + alias: '' + comment: 'send QPSK Constellation data to UDP + + port 1236 on the local machine' + eof: 'False' + ipaddr: 127.0.0.1 + port: '40137' + psize: '120' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [72, 1132.0] + rotation: 180 + state: enabled +- name: digital_constellation_decoder_cb_0 + id: digital_constellation_decoder_cb + parameters: + affinity: '' + alias: '' + comment: '8PSK decoding, same + + parameters as modulator' + constellation: digital.constellation_8psk_natural().base() + maxoutbuf: '0' + minoutbuf: '0' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [824, 540.0] + rotation: 0 + state: enabled +- name: digital_costas_loop_cc_0 + id: digital_costas_loop_cc + parameters: + affinity: '' + alias: '' + comment: 'locks the signal and + + converts into baseband' + maxoutbuf: '0' + minoutbuf: '0' + order: '8' + use_snr: 'False' + w: '0.15' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [584, 528.0] + rotation: 0 + state: enabled +- name: digital_diff_decoder_bb_0 + id: digital_diff_decoder_bb + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + modulus: '8' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1048, 540.0] + rotation: 0 + state: enabled +- name: digital_lms_dd_equalizer_cc_0 + id: digital_lms_dd_equalizer_cc + parameters: + affinity: '' + alias: '' + cnst: digital.constellation_8psk_natural().base() + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + mu: '0.01' + num_taps: '15' + sps: outputsps + states: + 
bus_sink: false + bus_source: false + bus_structure: null + coordinate: [304, 468.0] + rotation: 0 + state: enabled +- name: digital_pfb_clock_sync_xxx_0 + id: digital_pfb_clock_sync_xxx + parameters: + affinity: '' + alias: '' + comment: 'synchronize the Clock, + + works very well with drifting + + QO-100 signal' + filter_size: nfilts + init_phase: nfilts/16 + loop_bw: '0.06' + max_dev: '2' + maxoutbuf: '0' + minoutbuf: '0' + osps: outputsps + sps: sps + taps: rrc_taps + type: ccf + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [80, 556.0] + rotation: 0 + state: enabled +- name: low_pass_filter_0 + id: low_pass_filter + parameters: + affinity: '' + alias: '' + beta: '6.76' + comment: 'Anti-Aliasing filter + + Level correction + + and decimation' + cutoff_freq: '3900' + decim: '1' + gain: '12' + interp: '1' + maxoutbuf: '0' + minoutbuf: '0' + samp_rate: samp_rate + type: fir_filter_fff + width: '3300' + win: firdes.WIN_HAMMING + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1048, 316.0] + rotation: 180 + state: enabled +- name: mmse_resampler_xx_0 + id: mmse_resampler_xx + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + phase_shift: '0' + resamp_ratio: resamp + type: complex + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [136, 352.0] + rotation: 180 + state: true +- name: mmse_resampler_xx_0_0 + id: mmse_resampler_xx + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + phase_shift: '0' + resamp_ratio: samp_rate / 8000 + type: float + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [776, 904.0] + rotation: 180 + state: true +- name: qtgui_const_sink_x_0 + id: qtgui_const_sink_x + parameters: + affinity: '' + alias: '' + alpha1: '1.0' + alpha10: '1.0' + alpha2: '1.0' + alpha3: '1.0' + alpha4: '1.0' + alpha5: '1.0' + alpha6: '1.0' + alpha7: '1.0' + alpha8: '1.0' + alpha9: '1.0' + autoscale: 'False' + axislabels: 'True' + color1: '"blue"' + color10: '"red"' + color2: '"red"' + color3: '"red"' + color4: '"red"' + color5: '"red"' + color6: '"red"' + color7: '"red"' + color8: '"red"' + color9: '"red"' + comment: '' + grid: 'False' + gui_hint: '' + label1: '' + label10: '' + label2: '' + label3: '' + label4: '' + label5: '' + label6: '' + label7: '' + label8: '' + label9: '' + legend: 'True' + marker1: '0' + marker10: '0' + marker2: '0' + marker3: '0' + marker4: '0' + marker5: '0' + marker6: '0' + marker7: '0' + marker8: '0' + marker9: '0' + name: '""' + nconnections: '1' + size: '1024' + style1: '0' + style10: '0' + style2: '0' + style3: '0' + style4: '0' + style5: '0' + style6: '0' + style7: '0' + style8: '0' + style9: '0' + tr_chan: '0' + tr_level: '0.0' + tr_mode: qtgui.TRIG_MODE_FREE + tr_slope: qtgui.TRIG_SLOPE_POS + tr_tag: '""' + type: complex + update_time: '0.10' + width1: '1' + width10: '1' + width2: '1' + width3: '1' + width4: '1' + width5: '1' + width6: '1' + width7: '1' + width8: '1' + width9: '1' + xmax: '2' + xmin: '-2' + ymax: '2' + ymin: '-2' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [824, 756.0] + rotation: 0 + state: disabled +- name: qtgui_const_sink_x_0_0 + id: qtgui_const_sink_x + parameters: + affinity: '' + alias: '' + alpha1: '1.0' + alpha10: '1.0' + alpha2: '1.0' + alpha3: '1.0' + alpha4: '1.0' + alpha5: '1.0' + alpha6: '1.0' + alpha7: '1.0' + alpha8: '1.0' + alpha9: '1.0' + autoscale: 'False' + axislabels: 
'True' + color1: '"blue"' + color10: '"red"' + color2: '"red"' + color3: '"red"' + color4: '"red"' + color5: '"red"' + color6: '"red"' + color7: '"red"' + color8: '"red"' + color9: '"red"' + comment: '' + grid: 'False' + gui_hint: '' + label1: '' + label10: '' + label2: '' + label3: '' + label4: '' + label5: '' + label6: '' + label7: '' + label8: '' + label9: '' + legend: 'True' + marker1: '0' + marker10: '0' + marker2: '0' + marker3: '0' + marker4: '0' + marker5: '0' + marker6: '0' + marker7: '0' + marker8: '0' + marker9: '0' + name: '""' + nconnections: '1' + size: '1024' + style1: '0' + style10: '0' + style2: '0' + style3: '0' + style4: '0' + style5: '0' + style6: '0' + style7: '0' + style8: '0' + style9: '0' + tr_chan: '0' + tr_level: '0.0' + tr_mode: qtgui.TRIG_MODE_FREE + tr_slope: qtgui.TRIG_SLOPE_POS + tr_tag: '""' + type: complex + update_time: '0.10' + width1: '1' + width10: '1' + width2: '1' + width3: '1' + width4: '1' + width5: '1' + width6: '1' + width7: '1' + width8: '1' + width9: '1' + xmax: '2' + xmin: '-2' + ymax: '2' + ymin: '-2' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [864, 628.0] + rotation: 0 + state: disabled +- name: resamp + id: parameter + parameters: + alias: '' + comment: "Resampling Rate\nfrom Audio Rate\nto 8kS/s which is\nthe input of the\ + \ \nPolypashe Clock" + hide: none + label: resamp + short_id: r + type: intx + value: '6' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [328, 156.0] + rotation: 0 + state: true +- name: samp_rate + id: parameter + parameters: + alias: '' + comment: Audio Rate + hide: none + label: samp_rate + short_id: s + type: intx + value: '48000' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [200, 156.0] + rotation: 0 + state: true + +connections: +- [analog_agc2_xx_0_0, '0', digital_costas_loop_cc_0, '0'] +- [analog_const_source_x_0, '0', blocks_interleave_0, '0'] +- [analog_const_source_x_0_0, '0', blocks_interleave_0_0, '0'] +- [analog_const_source_x_0_1, '0', blocks_multiply_xx_0_1_0, '1'] +- [analog_sig_source_x_0_0_0, '0', blocks_complex_to_float_1, '0'] +- [audio_source_0, '0', low_pass_filter_0, '0'] +- [audio_source_0, '0', mmse_resampler_xx_0_0, '0'] +- [blocks_complex_to_float_0, '0', blocks_float_to_int_0, '0'] +- [blocks_complex_to_float_0, '1', blocks_float_to_int_0_0, '0'] +- [blocks_complex_to_float_1, '0', blocks_multiply_xx_0_1, '1'] +- [blocks_complex_to_float_1, '1', blocks_multiply_xx_0_0_0, '1'] +- [blocks_float_to_complex_0, '0', mmse_resampler_xx_0, '0'] +- [blocks_float_to_int_0, '0', blocks_interleave_0_0, '1'] +- [blocks_float_to_int_0_0, '0', blocks_interleave_0_0, '2'] +- [blocks_float_to_int_0_1, '0', blocks_interleave_0, '1'] +- [blocks_interleave_0, '0', blocks_udp_sink_0_0, '0'] +- [blocks_interleave_0_0, '0', blocks_udp_sink_0_0_0, '0'] +- [blocks_multiply_xx_0_0_0, '0', blocks_float_to_complex_0, '0'] +- [blocks_multiply_xx_0_1, '0', blocks_float_to_complex_0, '1'] +- [blocks_multiply_xx_0_1_0, '0', blocks_float_to_int_0_1, '0'] +- [digital_constellation_decoder_cb_0, '0', digital_diff_decoder_bb_0, '0'] +- [digital_costas_loop_cc_0, '0', blocks_complex_to_float_0, '0'] +- [digital_costas_loop_cc_0, '0', digital_constellation_decoder_cb_0, '0'] +- [digital_costas_loop_cc_0, '0', qtgui_const_sink_x_0, '0'] +- [digital_costas_loop_cc_0, '0', qtgui_const_sink_x_0_0, '0'] +- [digital_diff_decoder_bb_0, '0', blocks_udp_sink_0, '0'] +- [digital_lms_dd_equalizer_cc_0, '0', 
analog_agc2_xx_0_0, '0'] +- [digital_pfb_clock_sync_xxx_0, '0', digital_lms_dd_equalizer_cc_0, '0'] +- [low_pass_filter_0, '0', blocks_multiply_xx_0_0_0, '0'] +- [low_pass_filter_0, '0', blocks_multiply_xx_0_1, '0'] +- [mmse_resampler_xx_0, '0', digital_pfb_clock_sync_xxx_0, '0'] +- [mmse_resampler_xx_0_0, '0', blocks_multiply_xx_0_1_0, '0'] + +metadata: + file_format: 1 diff --git a/grc/8psk_tx.grc b/grc/8psk_tx.grc new file mode 100644 index 0000000..02d63e8 --- /dev/null +++ b/grc/8psk_tx.grc @@ -0,0 +1,391 @@ +options: + parameters: + author: kurt + category: '[GRC Hier Blocks]' + cmake_opt: '' + comment: 'requires GNU Radio 3.8xxx + + does NOT work with 3.7x' + copyright: '' + description: requires GNU Radio 3.8xxx + gen_cmake: 'Off' + gen_linking: dynamic + generate_options: no_gui + hier_block_src_path: '.:' + id: tx_8psk + max_nouts: '0' + output_language: python + placement: (0,0) + qt_qss_theme: '' + realtime_scheduling: '' + run: 'True' + run_command: '{python} -u {filename}' + run_options: run + sizing_mode: fixed + thread_safe_setters: '' + title: 8PSK Modem DJ0ABR + window_size: '' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [8, 8] + rotation: 0 + state: enabled + +blocks: +- name: mixf + id: variable + parameters: + comment: 'mid frequency + + in the audio + + spectrum. Set to get + + lowest and highest + + frequency within the + + transceiver filter range.' + value: '1500' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [336, 12.0] + rotation: 0 + state: enabled +- name: nfilts + id: variable + parameters: + comment: '' + value: '32' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [504, 12.0] + rotation: 0 + state: enabled +- name: sps + id: variable + parameters: + comment: 'Samples/Symbol + + fixed value, + + do not change. + + Used to adjust + + bitrate vs. bandwidth' + value: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [944, 36.0] + rotation: 0 + state: enabled +- name: analog_sig_source_x_0_0_0 + id: analog_sig_source_x + parameters: + affinity: '' + alias: '' + amp: '1' + comment: 'the modulator output is in the baseband at 0 Hz. + + Mix it with the required audio mid frequency. + + cos and -sin are used to combine I and Q + + into the final signal.' + freq: mixf + maxoutbuf: '0' + minoutbuf: '0' + offset: '0' + phase: '0' + samp_rate: samp_rate + type: complex + waveform: analog.GR_COS_WAVE + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [64, 484.0] + rotation: 0 + state: enabled +- name: audio_sink_0_0 + id: audio_sink + parameters: + affinity: '' + alias: '' + comment: 'send audio to + + transceiver' + device_name: '' + num_inputs: '1' + ok_to_block: 'True' + samp_rate: samp_rate + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [840, 420.0] + rotation: 0 + state: enabled +- name: blocks_add_xx_0 + id: blocks_add_xx + parameters: + affinity: '' + alias: '' + comment: 'generate the analog + + output signal.' 
+ maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [568, 344.0] + rotation: 0 + state: true +- name: blocks_complex_to_float_1 + id: blocks_complex_to_float + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [400, 344.0] + rotation: 0 + state: enabled +- name: blocks_multiply_const_vxx_0 + id: blocks_multiply_const_vxx + parameters: + affinity: '' + alias: '' + comment: 'reduce level for the + + audio output, improves + + linearity' + const: '0.05' + maxoutbuf: '0' + minoutbuf: '0' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [680, 420.0] + rotation: 0 + state: true +- name: blocks_multiply_xx_0_0 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: "mix I und Q \nto the mid \nfrequency\nspecified in\n\"mixf\"" + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: complex + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [288, 344.0] + rotation: 0 + state: enabled +- name: blocks_udp_source_0 + id: blocks_udp_source + parameters: + affinity: '' + alias: '' + comment: "receive an UDP data stream\nwith the bitrate of (see \ncomment samp_rate)\n\ + The stream is buffered, \nso send some bytes ahead\nto prefill the buffer\n\ + and avoid underrun" + eof: 'False' + ipaddr: 127.0.0.1 + maxoutbuf: '0' + minoutbuf: '0' + port: '40134' + psize: '258' + type: byte + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [736, 164.0] + rotation: 180 + state: true +- name: digital_constellation_modulator_0 + id: digital_constellation_modulator + parameters: + affinity: '' + alias: '' + comment: 'This modulator expects "Packed Bytes" + + which are 8 bits within one byte. + + The UDP source block deliveres bytes, + + so it fits perfectly.' 
+ constellation: digital.constellation_8psk_natural().base() + differential: 'True' + excess_bw: '0.25' + log: 'False' + maxoutbuf: '0' + minoutbuf: '0' + samples_per_symbol: resamprate + verbose: 'False' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [272, 172.0] + rotation: 180 + state: enabled +- name: qtgui_freq_sink_x_0 + id: qtgui_freq_sink_x + parameters: + affinity: '' + alias: '' + alpha1: '1.0' + alpha10: '1.0' + alpha2: '1.0' + alpha3: '1.0' + alpha4: '1.0' + alpha5: '1.0' + alpha6: '1.0' + alpha7: '1.0' + alpha8: '1.0' + alpha9: '1.0' + autoscale: 'False' + average: '0.1' + axislabels: 'True' + bw: samp_rate + color1: '"blue"' + color10: '"dark blue"' + color2: '"red"' + color3: '"green"' + color4: '"black"' + color5: '"cyan"' + color6: '"magenta"' + color7: '"yellow"' + color8: '"dark red"' + color9: '"dark green"' + comment: '' + ctrlpanel: 'False' + fc: '0' + fftsize: '1024' + freqhalf: 'False' + grid: 'True' + gui_hint: '' + label: Relative Gain + label1: '' + label10: '''''' + label2: '''''' + label3: '''''' + label4: '''''' + label5: '''''' + label6: '''''' + label7: '''''' + label8: '''''' + label9: '''''' + legend: 'True' + maxoutbuf: '0' + minoutbuf: '0' + name: '""' + nconnections: '1' + showports: 'False' + tr_chan: '0' + tr_level: '0.0' + tr_mode: qtgui.TRIG_MODE_FREE + tr_tag: '""' + type: float + units: dB + update_time: '0.10' + width1: '1' + width10: '1' + width2: '1' + width3: '1' + width4: '1' + width5: '1' + width6: '1' + width7: '1' + width8: '1' + width9: '1' + wintype: firdes.WIN_BLACKMAN_hARRIS + ymax: '10' + ymin: '-140' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [552, 552.0] + rotation: 0 + state: disabled +- name: resamprate + id: parameter + parameters: + alias: '' + comment: '' + hide: none + label: resamprate + short_id: r + type: intx + value: '24' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [808, 12.0] + rotation: 0 + state: true +- name: samp_rate + id: parameter + parameters: + alias: '' + comment: Audio Rate + hide: none + label: samp_rate + short_id: s + type: intx + value: '48000' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [696, 12.0] + rotation: 0 + state: true + +connections: +- [analog_sig_source_x_0_0_0, '0', blocks_multiply_xx_0_0, '1'] +- [blocks_add_xx_0, '0', blocks_multiply_const_vxx_0, '0'] +- [blocks_add_xx_0, '0', qtgui_freq_sink_x_0, '0'] +- [blocks_complex_to_float_1, '0', blocks_add_xx_0, '0'] +- [blocks_complex_to_float_1, '1', blocks_add_xx_0, '1'] +- [blocks_multiply_const_vxx_0, '0', audio_sink_0_0, '0'] +- [blocks_multiply_xx_0_0, '0', blocks_complex_to_float_1, '0'] +- [blocks_udp_source_0, '0', digital_constellation_modulator_0, '0'] +- [digital_constellation_modulator_0, '0', blocks_multiply_xx_0_0, '0'] + +metadata: + file_format: 1 diff --git a/grc/rxuniversal_qpsk_nogui.grc b/grc/rxuniversal_qpsk_nogui.grc new file mode 100644 index 0000000..ba7b63a --- /dev/null +++ b/grc/rxuniversal_qpsk_nogui.grc @@ -0,0 +1,973 @@ +options: + parameters: + author: DJ0ABR + category: '[GRC Hier Blocks]' + cmake_opt: '' + comment: 'send and receive a datastream + + with 3500 bit/s via a QO-100 + + SSB channel with 2700 Hz bandwidth + + works with Gnu Radio 3.8.xxx ONLY + + does not work with 3.7.x' + copyright: '' + description: works with Gnu Radio 3.8.xxx + gen_cmake: 'Off' + gen_linking: dynamic + generate_options: no_gui + hier_block_src_path: '.:' + id: qpsk_rx + 
max_nouts: '0' + output_language: python + placement: (0,0) + qt_qss_theme: '' + realtime_scheduling: '' + run: 'True' + run_command: '{python} -u {filename}' + run_options: run + sizing_mode: fixed + thread_safe_setters: '' + title: QPSK RX-Modem + window_size: '' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [8, 8] + rotation: 0 + state: enabled + +blocks: +- name: mixf + id: variable + parameters: + comment: 'mid frequency + + in the audio + + spectrum. Set to get + + lowest and highest + + frequency within the + + transceiver filter range.' + value: '1500' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [328, 12.0] + rotation: 0 + state: enabled +- name: nfilts + id: variable + parameters: + comment: '' + value: '32' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [704, 12.0] + rotation: 0 + state: enabled +- name: outputsps + id: variable + parameters: + comment: 'Samples/Symbol + + fixed value, + + do not change. + + Used to adjust + + bitrate vs. bandwidth' + value: '7' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [8, 212.0] + rotation: 0 + state: enabled +- name: qpsk__constellation + id: variable_constellation_rect + parameters: + comment: '' + const_points: '[0.707+0.707j, -0.707+0.707j, -0.707-0.707j, 0.707-0.707j]' + imag_sect: '2' + precision: '8' + real_sect: '2' + rot_sym: '4' + soft_dec_lut: '''auto''' + sym_map: '[0, 1, 2, 3]' + w_imag_sect: '1' + w_real_sect: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [976, 12.0] + rotation: 0 + state: enabled +- name: sps + id: variable + parameters: + comment: 'Resampling Rate + + of the Polyphase + + Clock Sync and its filter' + value: '4' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [568, 12.0] + rotation: 0 + state: enabled +- name: analog_agc2_xx_0_0 + id: analog_agc2_xx + parameters: + affinity: '' + alias: '' + attack_rate: '0.01' + comment: 'Costas loop needs AGC + + loop gain depends on input level' + decay_rate: '0.2' + gain: '1' + max_gain: '3' + maxoutbuf: '0' + minoutbuf: '0' + reference: '1' + type: complex + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [544, 636.0] + rotation: 0 + state: enabled +- name: analog_const_source_x_0 + id: analog_const_source_x + parameters: + affinity: '' + alias: '' + comment: 'Marker to find the start + + of the values' + const: '1000' + maxoutbuf: '0' + minoutbuf: '0' + type: int + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [464, 788.0] + rotation: 180 + state: true +- name: analog_const_source_x_0_0 + id: analog_const_source_x + parameters: + affinity: '' + alias: '' + comment: 'Marker to find the start + + of the values' + const: '1000' + maxoutbuf: '0' + minoutbuf: '0' + type: int + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [536, 948.0] + rotation: 180 + state: true +- name: analog_const_source_x_0_0_0 + id: analog_const_source_x + parameters: + affinity: '' + alias: '' + comment: 'Marker to find the start + + of the values' + const: '0' + maxoutbuf: '0' + minoutbuf: '0' + type: float + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [824, 1140.0] + rotation: 180 + state: disabled +- name: analog_const_source_x_0_1 + id: analog_const_source_x + parameters: + affinity: '' + alias: '' + comment: '' + const: 
'16777216' + maxoutbuf: '0' + minoutbuf: '0' + type: float + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [792, 916.0] + rotation: 180 + state: true +- name: analog_sig_source_x_0_0_0 + id: analog_sig_source_x + parameters: + affinity: '' + alias: '' + amp: '1' + comment: 'the modulator output is in the baseband at 0 Hz. + + Mix it with the required audio mid frequency. + + cos and -sin are used to combine I and Q + + into the frinal signal. + + Use it als for RX in the reverse direction' + freq: mixf + maxoutbuf: '0' + minoutbuf: '0' + offset: '0' + phase: '0' + samp_rate: samp_rate + type: complex + waveform: analog.GR_COS_WAVE + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [200, 220.0] + rotation: 0 + state: enabled +- name: analog_sig_source_x_1 + id: analog_sig_source_x + parameters: + affinity: '' + alias: '' + amp: '1' + comment: "Markers for the \nFrequ.Sink" + freq: '1500' + maxoutbuf: '0' + minoutbuf: '0' + offset: '0' + phase: '0' + samp_rate: samp_rate + type: float + waveform: analog.GR_COS_WAVE + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [712, 140.0] + rotation: 0 + state: disabled +- name: analog_sig_source_x_1_0 + id: analog_sig_source_x + parameters: + affinity: '' + alias: '' + amp: '1' + comment: '' + freq: '3000' + maxoutbuf: '0' + minoutbuf: '0' + offset: '0' + phase: '0' + samp_rate: samp_rate + type: float + waveform: analog.GR_COS_WAVE + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [704, 300.0] + rotation: 0 + state: disabled +- name: audio_source_0 + id: audio_source + parameters: + affinity: '' + alias: '' + comment: get audio from transceiver + device_name: '' + maxoutbuf: '0' + minoutbuf: '0' + num_outputs: '1' + ok_to_block: 'True' + samp_rate: samp_rate + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1328, 468.0] + rotation: 180 + state: true +- name: blocks_complex_to_float_0 + id: blocks_complex_to_float + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [824, 1032.0] + rotation: 180 + state: enabled +- name: blocks_complex_to_float_1 + id: blocks_complex_to_float + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [456, 248.0] + rotation: 0 + state: enabled +- name: blocks_float_to_complex_0 + id: blocks_float_to_complex + parameters: + affinity: '' + alias: '' + comment: 'combile I and Q + + to complex signal' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [320, 464.0] + rotation: 180 + state: enabled +- name: blocks_float_to_int_0 + id: blocks_float_to_int + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + scale: '16777216' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [632, 1028.0] + rotation: 180 + state: true +- name: blocks_float_to_int_0_0 + id: blocks_float_to_int + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + scale: '16777216' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [632, 1076.0] + rotation: 180 + state: true +- name: 
blocks_float_to_int_0_1 + id: blocks_float_to_int + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + scale: '1' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [512, 868.0] + rotation: 180 + state: true +- name: blocks_interleave_0 + id: blocks_interleave + parameters: + affinity: '' + alias: '' + blocksize: '1' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + num_streams: '2' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [360, 872.0] + rotation: 180 + state: true +- name: blocks_interleave_0_0 + id: blocks_interleave + parameters: + affinity: '' + alias: '' + blocksize: '1' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + num_streams: '3' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [320, 1000.0] + rotation: 180 + state: true +- name: blocks_multiply_xx_0_0_0 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: make I + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [520, 408.0] + rotation: 180 + state: enabled +- name: blocks_multiply_xx_0_1 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: make Q + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [560, 512.0] + rotation: 180 + state: enabled +- name: blocks_multiply_xx_0_1_0 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [640, 856.0] + rotation: 180 + state: enabled +- name: blocks_udp_sink_0 + id: blocks_udp_sink + parameters: + affinity: '' + alias: '' + comment: 'send RX data to UDP + + port 1235 on the local machine' + eof: 'False' + ipaddr: 127.0.0.1 + port: '40135' + psize: '344' + type: byte + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1240, 588.0] + rotation: 0 + state: true +- name: blocks_udp_sink_0_0 + id: blocks_udp_sink + parameters: + affinity: '' + alias: '' + comment: 'send QPSK Constellation data to UDP + + port 1236 on the local machine' + eof: 'False' + ipaddr: 127.0.0.1 + port: '40136' + psize: '120' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [64, 860.0] + rotation: 180 + state: enabled +- name: blocks_udp_sink_0_0_0 + id: blocks_udp_sink + parameters: + affinity: '' + alias: '' + comment: 'send QPSK Constellation data to UDP + + port 1236 on the local machine' + eof: 'False' + ipaddr: 127.0.0.1 + port: '40137' + psize: '120' + type: int + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [64, 988.0] + rotation: 180 + state: enabled +- name: digital_constellation_decoder_cb_0 + id: digital_constellation_decoder_cb + parameters: + affinity: '' + alias: '' + comment: 'QPSK decoding, same + + parameters as modulator' + constellation: qpsk__constellation + maxoutbuf: '0' + minoutbuf: '0' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [992, 612.0] + rotation: 0 + state: enabled +- name: digital_costas_loop_cc_0 + id: digital_costas_loop_cc + parameters: + 
affinity: '' + alias: '' + comment: 'locks the signal and + + converts into baseband' + maxoutbuf: '0' + minoutbuf: '0' + order: '4' + use_snr: 'False' + w: '0.06' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [744, 616.0] + rotation: 0 + state: enabled +- name: digital_lms_dd_equalizer_cc_0 + id: digital_lms_dd_equalizer_cc + parameters: + affinity: '' + alias: '' + cnst: qpsk__constellation + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + mu: '0.01' + num_taps: '15' + sps: outputsps + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [288, 628.0] + rotation: 0 + state: enabled +- name: digital_pfb_clock_sync_xxx_0 + id: digital_pfb_clock_sync_xxx + parameters: + affinity: '' + alias: '' + comment: 'synchronize the Clock, + + works very well with drifting + + QO-100 signal' + filter_size: nfilts + init_phase: nfilts/2 + loop_bw: '0.1' + max_dev: '1.5' + maxoutbuf: '0' + minoutbuf: '0' + osps: outputsps + sps: sps + taps: firdes.root_raised_cosine(nfilts, nfilts, 1.0/float(sps), 0.35, 11*sps*nfilts) + type: ccf + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [64, 652.0] + rotation: 0 + state: enabled +- name: low_pass_filter_0 + id: low_pass_filter + parameters: + affinity: '' + alias: '' + beta: '6.76' + comment: 'Anti-Aliasing filter + + Level correction + + and decimation' + cutoff_freq: '3500' + decim: '1' + gain: '8' + interp: '1' + maxoutbuf: '0' + minoutbuf: '0' + samp_rate: samp_rate + type: fir_filter_fff + width: '3100' + win: firdes.WIN_HAMMING + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [1048, 412.0] + rotation: 180 + state: enabled +- name: mmse_resampler_xx_0 + id: mmse_resampler_xx + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + phase_shift: '0' + resamp_ratio: samp_rate / 8000 + type: float + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [776, 768.0] + rotation: 180 + state: true +- name: mmse_resampler_xx_1 + id: mmse_resampler_xx + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + phase_shift: '0' + resamp_ratio: resamp + type: complex + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [72, 480.0] + rotation: 180 + state: true +- name: qtgui_const_sink_x_0 + id: qtgui_const_sink_x + parameters: + affinity: '' + alias: '' + alpha1: '1.0' + alpha10: '1.0' + alpha2: '1.0' + alpha3: '1.0' + alpha4: '1.0' + alpha5: '1.0' + alpha6: '1.0' + alpha7: '1.0' + alpha8: '1.0' + alpha9: '1.0' + autoscale: 'False' + axislabels: 'True' + color1: '"blue"' + color10: '"red"' + color2: '"red"' + color3: '"red"' + color4: '"red"' + color5: '"red"' + color6: '"red"' + color7: '"red"' + color8: '"red"' + color9: '"red"' + comment: '' + grid: 'False' + gui_hint: '' + label1: '' + label10: '' + label2: '' + label3: '' + label4: '' + label5: '' + label6: '' + label7: '' + label8: '' + label9: '' + legend: 'True' + marker1: '0' + marker10: '0' + marker2: '0' + marker3: '0' + marker4: '0' + marker5: '0' + marker6: '0' + marker7: '0' + marker8: '0' + marker9: '0' + name: '""' + nconnections: '2' + size: '1024' + style1: '0' + style10: '0' + style2: '0' + style3: '0' + style4: '0' + style5: '0' + style6: '0' + style7: '0' + style8: '0' + style9: '0' + tr_chan: '0' + tr_level: '0.0' + tr_mode: qtgui.TRIG_MODE_FREE + tr_slope: qtgui.TRIG_SLOPE_POS + tr_tag: '""' + type: complex + update_time: '0.10' + width1: 
'1' + width10: '1' + width2: '1' + width3: '1' + width4: '1' + width5: '1' + width6: '1' + width7: '1' + width8: '1' + width9: '1' + xmax: '2' + xmin: '-2' + ymax: '2' + ymin: '-2' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [768, 532.0] + rotation: 180 + state: disabled +- name: qtgui_freq_sink_x_1 + id: qtgui_freq_sink_x + parameters: + affinity: '' + alias: '' + alpha1: '1.0' + alpha10: '1.0' + alpha2: '1.0' + alpha3: '1.0' + alpha4: '1.0' + alpha5: '1.0' + alpha6: '1.0' + alpha7: '1.0' + alpha8: '1.0' + alpha9: '1.0' + autoscale: 'False' + average: '1.0' + axislabels: 'True' + bw: samp_rate + color1: '"blue"' + color10: '"dark blue"' + color2: '"red"' + color3: '"green"' + color4: '"black"' + color5: '"cyan"' + color6: '"magenta"' + color7: '"yellow"' + color8: '"dark red"' + color9: '"dark green"' + comment: '' + ctrlpanel: 'False' + fc: '0' + fftsize: '4096' + freqhalf: 'False' + grid: 'True' + gui_hint: '' + label: Relative Gain + label1: '' + label10: '''''' + label2: '''''' + label3: '''''' + label4: '''''' + label5: '''''' + label6: '''''' + label7: '''''' + label8: '''''' + label9: '''''' + legend: 'True' + maxoutbuf: '0' + minoutbuf: '0' + name: TX / RX Spectrum + nconnections: '3' + showports: 'False' + tr_chan: '0' + tr_level: '0.0' + tr_mode: qtgui.TRIG_MODE_FREE + tr_tag: '""' + type: float + units: dB + update_time: '.1' + width1: '1' + width10: '1' + width2: '1' + width3: '1' + width4: '1' + width5: '1' + width6: '1' + width7: '1' + width8: '1' + width9: '1' + wintype: firdes.WIN_BLACKMAN_hARRIS + ymax: '10' + ymin: '-140' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [992, 208.0] + rotation: 0 + state: disabled +- name: resamp + id: parameter + parameters: + alias: '' + comment: "Resampling Rate\nfrom Audio Rate\nto 8kS/s which is\nthe input of the\ + \ \nPolypashe Clock" + hide: none + label: resamp + short_id: r + type: intx + value: '5' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [464, 12.0] + rotation: 0 + state: true +- name: samp_rate + id: parameter + parameters: + alias: '' + comment: Audio Rate + hide: none + label: samp_rate + short_id: s + type: intx + value: '44100' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [216, 12.0] + rotation: 0 + state: true + +connections: +- [analog_agc2_xx_0_0, '0', digital_costas_loop_cc_0, '0'] +- [analog_const_source_x_0, '0', blocks_interleave_0, '0'] +- [analog_const_source_x_0_0, '0', blocks_interleave_0_0, '0'] +- [analog_const_source_x_0_0_0, '0', blocks_float_to_int_0_0, '0'] +- [analog_const_source_x_0_1, '0', blocks_multiply_xx_0_1_0, '1'] +- [analog_sig_source_x_0_0_0, '0', blocks_complex_to_float_1, '0'] +- [analog_sig_source_x_1, '0', qtgui_freq_sink_x_1, '1'] +- [analog_sig_source_x_1_0, '0', qtgui_freq_sink_x_1, '2'] +- [audio_source_0, '0', low_pass_filter_0, '0'] +- [audio_source_0, '0', mmse_resampler_xx_0, '0'] +- [blocks_complex_to_float_0, '0', blocks_float_to_int_0, '0'] +- [blocks_complex_to_float_0, '1', blocks_float_to_int_0_0, '0'] +- [blocks_complex_to_float_1, '0', blocks_multiply_xx_0_1, '1'] +- [blocks_complex_to_float_1, '1', blocks_multiply_xx_0_0_0, '1'] +- [blocks_float_to_complex_0, '0', mmse_resampler_xx_1, '0'] +- [blocks_float_to_int_0, '0', blocks_interleave_0_0, '1'] +- [blocks_float_to_int_0_0, '0', blocks_interleave_0_0, '2'] +- [blocks_float_to_int_0_1, '0', blocks_interleave_0, '1'] +- [blocks_interleave_0, '0', blocks_udp_sink_0_0, 
'0'] +- [blocks_interleave_0_0, '0', blocks_udp_sink_0_0_0, '0'] +- [blocks_multiply_xx_0_0_0, '0', blocks_float_to_complex_0, '0'] +- [blocks_multiply_xx_0_1, '0', blocks_float_to_complex_0, '1'] +- [blocks_multiply_xx_0_1_0, '0', blocks_float_to_int_0_1, '0'] +- [digital_constellation_decoder_cb_0, '0', blocks_udp_sink_0, '0'] +- [digital_costas_loop_cc_0, '0', blocks_complex_to_float_0, '0'] +- [digital_costas_loop_cc_0, '0', digital_constellation_decoder_cb_0, '0'] +- [digital_costas_loop_cc_0, '0', qtgui_const_sink_x_0, '0'] +- [digital_lms_dd_equalizer_cc_0, '0', analog_agc2_xx_0_0, '0'] +- [digital_lms_dd_equalizer_cc_0, '0', qtgui_const_sink_x_0, '1'] +- [digital_pfb_clock_sync_xxx_0, '0', digital_lms_dd_equalizer_cc_0, '0'] +- [low_pass_filter_0, '0', blocks_multiply_xx_0_0_0, '0'] +- [low_pass_filter_0, '0', blocks_multiply_xx_0_1, '0'] +- [low_pass_filter_0, '0', qtgui_freq_sink_x_1, '0'] +- [mmse_resampler_xx_0, '0', blocks_multiply_xx_0_1_0, '0'] +- [mmse_resampler_xx_1, '0', digital_pfb_clock_sync_xxx_0, '0'] + +metadata: + file_format: 1 diff --git a/grc/txuniversal_qpsk_nogui_ownmod.grc b/grc/txuniversal_qpsk_nogui_ownmod.grc new file mode 100644 index 0000000..0f24888 --- /dev/null +++ b/grc/txuniversal_qpsk_nogui_ownmod.grc @@ -0,0 +1,383 @@ +options: + parameters: + author: DJ0ABR + category: '[GRC Hier Blocks]' + cmake_opt: '' + comment: 'requires GNU Radio 3.8xxx + + does NOT work with 3.7x' + copyright: DJ0ABR + description: requires GNU Radio 3.8xxx + gen_cmake: 'On' + gen_linking: dynamic + generate_options: no_gui + hier_block_src_path: '.:' + id: qpsk_tx + max_nouts: '0' + output_language: python + placement: (0,0) + qt_qss_theme: '' + realtime_scheduling: '' + run: 'True' + run_command: '{python} -u {filename}' + run_options: run + sizing_mode: fixed + thread_safe_setters: '' + title: 'QPSK TX-Modem ' + window_size: '' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [8, 8] + rotation: 0 + state: enabled + +blocks: +- name: mixf + id: variable + parameters: + comment: 'mid frequency + + in the audio + + spectrum. Set to get + + lowest and highest + + frequency within the + + transceiver filter range.' + value: '1500' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [360, 12.0] + rotation: 0 + state: enabled +- name: qpsk__constellation + id: variable_constellation_rect + parameters: + comment: 'alternative: + + [0.707+0.707j, -0.707+0.707j, -0.707-0.707j, 0.707-0.707j] + + does not make a difference' + const_points: '[1+1j, -1+1j, -1-1j, 1-1j]' + imag_sect: '2' + precision: '8' + real_sect: '2' + rot_sym: '4' + soft_dec_lut: None + sym_map: '[0, 1, 2, 3]' + w_imag_sect: '1' + w_real_sect: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [8, 196.0] + rotation: 0 + state: enabled +- name: analog_sig_source_x_0_0_0 + id: analog_sig_source_x + parameters: + affinity: '' + alias: '' + amp: '1' + comment: 'the modulator output is in the baseband at 0 Hz. + + Mix it with the required audio mid frequency. + + cos and -sin are used to combine I and Q + + into the final signal.' 
+ freq: mixf + maxoutbuf: '0' + minoutbuf: '0' + offset: '0' + phase: '0' + samp_rate: samp_rate + type: complex + waveform: analog.GR_COS_WAVE + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [96, 476.0] + rotation: 0 + state: enabled +- name: audio_sink_0_0 + id: audio_sink + parameters: + affinity: '' + alias: '' + comment: 'send audio to + + transceiver' + device_name: '' + num_inputs: '1' + ok_to_block: 'True' + samp_rate: samp_rate + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [928, 356.0] + rotation: 0 + state: enabled +- name: blocks_add_xx_0 + id: blocks_add_xx + parameters: + affinity: '' + alias: '' + comment: 'generate the analog + + output signal: USB + + (for LSB use substraction)' + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [696, 376.0] + rotation: 0 + state: true +- name: blocks_complex_to_float_1 + id: blocks_complex_to_float + parameters: + affinity: '' + alias: '' + comment: '' + maxoutbuf: '0' + minoutbuf: '0' + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [512, 376.0] + rotation: 0 + state: enabled +- name: blocks_multiply_const_vxx_0 + id: blocks_multiply_const_vxx + parameters: + affinity: '' + alias: '' + comment: 'reduce level for the + + audio output, improves + + linearity' + const: '0.05' + maxoutbuf: '0' + minoutbuf: '0' + type: float + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [784, 356.0] + rotation: 0 + state: true +- name: blocks_multiply_xx_0_0 + id: blocks_multiply_xx + parameters: + affinity: '' + alias: '' + comment: "mix I und Q \nto the mid \nfrequency\nspecified in\n\"mixf\"" + maxoutbuf: '0' + minoutbuf: '0' + num_inputs: '2' + type: complex + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [400, 376.0] + rotation: 0 + state: enabled +- name: blocks_udp_source_0 + id: blocks_udp_source + parameters: + affinity: '' + alias: '' + comment: "receive an UDP data stream\nwith the bitrate of (see \ncomment samp_rate)\n\ + The stream is buffered, \nso send some bytes ahead\nto prefill the buffer\n\ + and avoid underrun" + eof: 'False' + ipaddr: 127.0.0.1 + maxoutbuf: '0' + minoutbuf: '0' + port: '40134' + psize: '258' + type: byte + vlen: '1' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [824, 148.0] + rotation: 180 + state: enabled +- name: digital_constellation_modulator_0 + id: digital_constellation_modulator + parameters: + affinity: '' + alias: '' + comment: 'unpack bytes to bits + + make symbols 2bits/sym + + make constellation' + constellation: qpsk__constellation + differential: 'False' + excess_bw: '0.35' + log: 'False' + maxoutbuf: '0' + minoutbuf: '0' + samples_per_symbol: resamprate + verbose: 'False' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [392, 180.0] + rotation: 180 + state: enabled +- name: qtgui_freq_sink_x_0 + id: qtgui_freq_sink_x + parameters: + affinity: '' + alias: '' + alpha1: '1.0' + alpha10: '1.0' + alpha2: '1.0' + alpha3: '1.0' + alpha4: '1.0' + alpha5: '1.0' + alpha6: '1.0' + alpha7: '1.0' + alpha8: '1.0' + alpha9: '1.0' + autoscale: 'False' + average: '0.1' + axislabels: 'True' + bw: samp_rate + color1: '"blue"' + color10: '"dark blue"' + color2: '"red"' + color3: '"green"' + color4: '"black"' + color5: '"cyan"' + 
color6: '"magenta"' + color7: '"yellow"' + color8: '"dark red"' + color9: '"dark green"' + comment: '' + ctrlpanel: 'False' + fc: '0' + fftsize: '1024' + freqhalf: 'False' + grid: 'True' + gui_hint: '' + label: Relative Gain + label1: '' + label10: '''''' + label2: '''''' + label3: '''''' + label4: '''''' + label5: '''''' + label6: '''''' + label7: '''''' + label8: '''''' + label9: '''''' + legend: 'True' + maxoutbuf: '0' + minoutbuf: '0' + name: '""' + nconnections: '1' + showports: 'False' + tr_chan: '0' + tr_level: '0.0' + tr_mode: qtgui.TRIG_MODE_FREE + tr_tag: '""' + type: float + units: dB + update_time: '0.10' + width1: '1' + width10: '1' + width2: '1' + width3: '1' + width4: '1' + width5: '1' + width6: '1' + width7: '1' + width8: '1' + width9: '1' + wintype: firdes.WIN_BLACKMAN_hARRIS + ymax: '10' + ymin: '-140' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [880, 576.0] + rotation: 0 + state: disabled +- name: resamprate + id: parameter + parameters: + alias: '' + comment: '' + hide: none + label: resamprate + short_id: r + type: intx + value: '20' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [776, 12.0] + rotation: 0 + state: true +- name: samp_rate + id: parameter + parameters: + alias: '' + comment: Audio Rate + hide: none + label: samp_rate + short_id: s + type: intx + value: '44100' + states: + bus_sink: false + bus_source: false + bus_structure: null + coordinate: [208, 12.0] + rotation: 0 + state: true + +connections: +- [analog_sig_source_x_0_0_0, '0', blocks_multiply_xx_0_0, '1'] +- [blocks_add_xx_0, '0', blocks_multiply_const_vxx_0, '0'] +- [blocks_add_xx_0, '0', qtgui_freq_sink_x_0, '0'] +- [blocks_complex_to_float_1, '0', blocks_add_xx_0, '0'] +- [blocks_complex_to_float_1, '1', blocks_add_xx_0, '1'] +- [blocks_multiply_const_vxx_0, '0', audio_sink_0_0, '0'] +- [blocks_multiply_xx_0_0, '0', blocks_complex_to_float_1, '0'] +- [blocks_udp_source_0, '0', digital_constellation_modulator_0, '0'] +- [digital_constellation_modulator_0, '0', blocks_multiply_xx_0_0, '0'] + +metadata: + file_format: 1 diff --git a/images/readme.txt b/images/readme.txt new file mode 100644 index 0000000..4119276 --- /dev/null +++ b/images/readme.txt @@ -0,0 +1 @@ +Images are available here: diff --git a/modem/Makefile b/modem/Makefile new file mode 100644 index 0000000..7a484d0 --- /dev/null +++ b/modem/Makefile @@ -0,0 +1,13 @@ +CFLAGS=-O3 -Wall +LDLIBS= -L. -lpthread -lfftw3 -lm -lzip +CC=c++ +PROGNAME=qo100modem +OBJ=qo100modem.o main_helper.o udp.o frame_packer.o scrambler.o crc16.o fec.o fft.o constellation.o arraysend.o + +all: qo100modem + +qo100modem: $(OBJ) + $(CC) -g -o $@ $^ $(LDFLAGS) $(LDLIBS) + +clean: + rm -f *.o qo100modem diff --git a/modem/arraysend.c b/modem/arraysend.c new file mode 100644 index 0000000..feb96c3 --- /dev/null +++ b/modem/arraysend.c @@ -0,0 +1,225 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the
+* GNU General Public License for more details.
+*
+* You should have received a copy of the GNU General Public License
+* along with this program; if not, write to the Free Software
+* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*
+*/
+
+#include "qo100modem.h"
+
+int AddHeader(uint8_t *data, int len, char *filename);
+uint8_t *zipArray(uint8_t *data, int length, int *ziplen);
+
+#define ZIPPED
+#define TXMAXSIZE 200000
+uint8_t TXarray[TXMAXSIZE];
+int txlen;              // total length of TXarray
+int txpos;              // current position in TXarray
+uint8_t txtype = 0;     // file type (from GUI)
+uint8_t filestat = 0;   // 0=first frame, 1=next frame, 2=last frame
+
+/*
+* start sending a named byte array
+* data ....... contents of the byte array
+* length ..... length of the byte array
+* type ....... type of the file (see statics)
+* filename ... description of the file, or its name, which is sent with the data
+*/
+
+
+int arraySend(uint8_t *data, int length, uint8_t type, char *filename)
+{
+    if((length+55) >= TXMAXSIZE)
+    {
+        printf("file TOO long. Max is %d bytes\n",TXMAXSIZE);
+        return 0;
+    }
+
+    txtype = type;
+    txpos = 0;
+    filestat = 0;
+
+    // if it is an ASCII, HTML or binary file, zip it
+    if(type == 3 || type == 4 || type == 5)
+    {
+        #ifdef ZIPPED
+        int ziplen = 0;
+        printf("orig len:%d\n",length);
+        uint8_t *zipdata = zipArray(data,length,&ziplen);
+        if(zipdata==NULL) return 0;
+        printf("zipped len:%d\n",ziplen);
+        // add a file header and copy into TXarray for transmission
+        txlen = AddHeader(zipdata,ziplen,filename);
+        #else
+        txlen = AddHeader(data,length,filename);
+        #endif
+        printf("txlen:%d\n",txlen);
+    }
+    else
+    {
+        // add a file header and copy into TXarray for transmission
+        txlen = AddHeader(data,length,filename);
+    }
+
+    // marker: we are sending
+    setSending(1);
+    return 1;
+}
+
+int AddHeader(uint8_t *data, int len, char *filename)
+{
+    // make a unique ID number for this file:
+    // we simply calculate the CRC16 of the filename
+    uint16_t fncrc = Crc16_messagecalc(CRC16FILE, (uint8_t *)filename,strlen(filename));
+
+    // create the file header
+    // 50 bytes ... filename (or the first 50 chars of the filename)
+    // 2 bytes .... CRC16 of the filename, used as a file ID
+    // 3 bytes .... size of the file
+
+    int flen = strlen(filename);
+    if (flen > 50) flen = 50;
+    memcpy(TXarray,filename,flen);
+
+    TXarray[50] = (uint8_t)((fncrc >> 8)&0xff);
+    TXarray[51] = (uint8_t)(fncrc&0xff);
+
+    TXarray[52] = len >> 16;
+    TXarray[53] = len >> 8;
+    TXarray[54] = len;
+
+    memcpy(TXarray+55,data,len);
+
+    return len+55;
+}
+
+// called from main() in a loop
+// sends an array if one was queued by arraySend(..)
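+//
+// Layout of TXarray as built by AddHeader() above (a summary derived from
+// that code; offsets are bytes):
+//   0..49   filename (at most the first 50 characters)
+//   50..51  CRC16 of the filename, used as the file ID
+//   52..54  payload length, 24 bit, big endian
+//   55..    payload (zipped by zipArray() for file types 3, 4 and 5 when ZIPPED is defined)
+//
+// doArraySend() below cuts TXarray into chunks of PAYLOADLEN bytes and hands
+// them to toGR_sendData() together with a frame status:
+//   0 = first frame, 1 = next frame, 2 = last frame,
+//   3 = only frame (the whole file fits into a single frame).
+// The last (or only) frame is transmitted twice.
+//
+// Hypothetical usage sketch (illustration only; buf/buflen are placeholders
+// and the real call sites are elsewhere in the modem sources):
+//
+//   if(arraySend(buf, buflen, 3, "example.txt"))   // queue a file; types 3..5 get zipped
+//       while(getSending())
+//           doArraySend();                         // emits roughly one frame per call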
+void doArraySend() +{ + if(getSending() == 0) return; + + if(filestat == 0) + { + // send first frame + printf("Start Array Send %d\n",getSending()); + toGR_Preamble(); + if(txlen <= PAYLOADLEN) + { + // we just need to send one frame + printf("send last frame only\n"); + toGR_sendData(TXarray, txtype, 3); + toGR_sendData(TXarray, txtype, 3); + setSending(0); + } + else + { + printf("send first frame\n"); + // data is longer than one PAYLOAD + toGR_sendData(TXarray, txtype, filestat); + txpos += PAYLOADLEN; + filestat = 1; + } + return; + } + + if(filestat == 1) + { + // check if this is the last frame + int restlen = txlen - txpos; + if(restlen <= PAYLOADLEN) + { + // send as the last frame + printf("send last frame\n"); + toGR_sendData(TXarray+txpos, txtype, 2); + toGR_sendData(TXarray+txpos, txtype, 2); + setSending(0); // transmission complete + } + else + { + // additional frame follows + printf("send next frame\n"); + // from txdata send one chunk of length PAYLOADLEN + toGR_sendData(TXarray+txpos, txtype, filestat); + txpos += PAYLOADLEN; + } + return; + } +} + +// make _arraySending flag thread safe +// it is called from main() and from udp-RX +pthread_mutex_t as_crit_sec; +#define AS_LOCK pthread_mutex_lock(&as_crit_sec) +#define AS_UNLOCK pthread_mutex_unlock(&as_crit_sec) + +int __arraySending = 0; // 1 ... Array transmission in progress + +void setSending(uint8_t onoff) +{ + AS_LOCK; + __arraySending = onoff; + AS_UNLOCK; +} + +int getSending() +{ + int as; + AS_LOCK; + if(__arraySending != 0) + printf("__arraySending: %d\n",__arraySending); + as = __arraySending; + AS_UNLOCK; + return as; +} + +#define defaultTXzipFN "tmp.zip" + +uint8_t *zipArray(uint8_t *data, int length, int *ziplen) +{ + int err = 0; + unlink(defaultTXzipFN); // delete existing zip file + struct zip *zp = zip_open(defaultTXzipFN, ZIP_CREATE, &err); + + zip_source_t *s; + if ((s=zip_source_buffer(zp, data, length, 0)) == NULL || + zip_file_add(zp, "my2databuffer", s, ZIP_FL_ENC_UTF_8) < 0) + { + zip_source_free(s); + printf("error adding file: %s\n", zip_strerror(zp)); + return NULL; + } + + zip_close(zp); + + // zip file is done + // now read the file and return the buffer + #define TXMAXSIZE 200000 + static uint8_t ZIPdata[TXMAXSIZE]; + FILE *fp=fopen(defaultTXzipFN,"rb"); + if(fp) + { + *ziplen = fread(ZIPdata,1,TXMAXSIZE,fp); + fclose(fp); + return ZIPdata; + } + + return NULL; +} diff --git a/modem/constellation.c b/modem/constellation.c new file mode 100644 index 0000000..c59c361 --- /dev/null +++ b/modem/constellation.c @@ -0,0 +1,170 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+* +*/ + +#include "qo100modem.h" + +// functions for non-differential QPSK +// depending on the phase shift rotate a data blocks constellation + +//uint8_t headerbytes[HEADERLEN] = {0x53, 0xe1, 0xa6}; +// corresponds to these QPSK symbols: +// bits: 01010011 11100001 10100110 +// syms: 1 1 0 3 3 2 0 1 2 2 1 2 + +uint8_t rxbytebuf[UDPBLOCKLEN+100]; // +100 ... reserve, just to be sure + +uint8_t *convertQPSKSymToBytes(uint8_t *rxsymbols) +{ + int sidx = 0; + for(int i=0; i> 6) & 3; + syms[symidx++] = (bytes[i] >> 4) & 3; + syms[symidx++] = (bytes[i] >> 2) & 3; + syms[symidx++] = (bytes[i] >> 0) & 3; + } +} + +void rotateQPSKsyms(uint8_t *src, uint8_t *dst, int len) +{ + for(int i=0; i> 5) & 7; + syms[symidx++] = (bytes[0+i] >> 2) & 7; + syms[symidx++] = ((bytes[0+i] & 3) << 1) | ((bytes[1+i] >> 7) & 1); + syms[symidx++] = (bytes[1+i] >> 4) & 7; + syms[symidx++] = (bytes[1+i] >> 1) & 7; + syms[symidx++] = ((bytes[1+i] & 1) << 2) | ((bytes[2+i] >> 6) & 3); + syms[symidx++] = (bytes[2+i] >> 3) & 7; + syms[symidx++] = bytes[2+i] & 7; + } +} + +void rotate8PSKsyms(uint8_t *src, uint8_t *dst, int len) +{ + for(int i=0; i> 1; + rxbytebuf[i+1] = rxsymbols[sidx++] << 7; + rxbytebuf[i+1] |= rxsymbols[sidx++] << 4; + rxbytebuf[i+1] |= rxsymbols[sidx++] << 1; + rxbytebuf[i+1] |= rxsymbols[sidx] >> 2; + rxbytebuf[i+2] = rxsymbols[sidx++] << 6; + rxbytebuf[i+2] |= rxsymbols[sidx++] << 3; + rxbytebuf[i+2] |= rxsymbols[sidx++]; + } + return rxbytebuf; +} + +void shiftleft(uint8_t *data, int shiftnum, int len) +{ + for(int j=0; j=0; i--) + { + b1 = (data[i] & 0x80)>>7; + data[i] <<= 1; + data[i] |= b2; + b2 = b1; + } + } +} diff --git a/modem/crc16.c b/modem/crc16.c new file mode 100644 index 0000000..d182087 --- /dev/null +++ b/modem/crc16.c @@ -0,0 +1,83 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+* +*/ + +#include "qo100modem.h" + +// since we use a static crc register we need TWO separated registers +// for RX and TX to get it thread safe, no.2 is for file ID generation + +uint16_t reg16[3] = {0xffff,0xffff}; // shift register + +uint16_t Crc16_bytecalc(int rxtx, uint8_t byt) +{ + uint16_t polynom = 0x8408; // generator polynom + + for (int i = 0; i < 8; ++i) + { + if ((reg16[rxtx] & 1) != (byt & 1)) + reg16[rxtx] = (uint16_t)((reg16[rxtx] >> 1) ^ polynom); + else + reg16[rxtx] >>= 1; + byt >>= 1; + } + return reg16[rxtx]; +} + +uint16_t Crc16_messagecalc(int rxtx, uint8_t *data,int len) +{ + reg16[rxtx] = 0xffff; + for (int i = 0; i < len; i++) + reg16[rxtx] = Crc16_bytecalc(rxtx,data[i]); + return reg16[rxtx]; +} + +// ================================================================= + +uint32_t reg32[2] = {0xffffffff,0xffffffff}; // Shiftregister + +void crc32_bytecalc(int rxtx, unsigned char byte) +{ +int i; +uint32_t polynom = 0xEDB88320; // Generatorpolynom + + for (i=0; i<8; ++i) + { + if ((reg32[rxtx]&1) != (byte&1)) + reg32[rxtx] = (reg32[rxtx]>>1)^polynom; + else + reg32[rxtx] >>= 1; + byte >>= 1; + } +} + +uint32_t crc32_messagecalc(int rxtx, unsigned char *data, int len) +{ +int i; + + reg32[rxtx] = 0xffffffff; + for(i=0; i +#include +#include + +#include "fec/schifra_galois_field.hpp" +#include "fec/schifra_galois_field_polynomial.hpp" +#include "fec/schifra_sequential_root_generator_polynomial_creator.hpp" +#include "fec/schifra_reed_solomon_encoder.hpp" +#include "fec/schifra_reed_solomon_decoder.hpp" +#include "fec/schifra_reed_solomon_block.hpp" +#include "fec/schifra_error_processes.hpp" + +/* Finite Field Parameters */ +const std::size_t field_descriptor = 8; +const std::size_t generator_polynomial_index = 120; +const std::size_t generator_polynomial_root_count = FECLEN; + +/* Reed Solomon Code Parameters */ +const std::size_t code_length = FECBLOCKLEN; +const std::size_t fec_length = FECLEN; +const std::size_t data_length = code_length - fec_length; + +/* Instantiate Finite Field and Generator Polynomials */ +const schifra::galois::field field(field_descriptor, + schifra::galois::primitive_polynomial_size06, + schifra::galois::primitive_polynomial06); + +schifra::galois::field_polynomial generator_polynomial(field); + +/* Instantiate Encoder and Decoder (Codec) */ +typedef schifra::reed_solomon::encoder encoder_t; +typedef schifra::reed_solomon::decoder decoder_t; + + + + +int cfec_Reconstruct(uint8_t *darr, uint8_t *destination) +{ +schifra::reed_solomon::block rxblock; + + for(std::size_t i=0; i block; + + // fill payload into an FEC-block + for(std::size_t i=0; i + +typedef unsigned char gf; + +typedef struct { + unsigned long magic; + unsigned short k, n; /* parameters of the code */ + gf* enc_matrix; +} fec_t; + +#if defined(_MSC_VER) +// actually, some of the flavors (i.e. Enterprise) do support restrict +//#define restrict __restrict +#define restrict +#endif + +/** + * param k the number of blocks required to reconstruct + * param m the total number of blocks created + */ +fec_t* fec_new(unsigned short k, unsigned short m); +void fec_free(fec_t* p); + +/** + * @param inpkts the "primary blocks" i.e. 
the chunks of the input data + * @param fecs buffers into which the secondary blocks will be written + * @param block_nums the numbers of the desired check blocks (the id >= k) which fec_encode() will produce and store into the buffers of the fecs parameter + * @param num_block_nums the length of the block_nums array + * @param sz size of a packet in bytes + */ +void fec_encode(const fec_t* code, const gf** src, gf** fecs, size_t sz); + +/** + * @param inpkts an array of packets (size k); If a primary block, i, is present then it must be at index i. Secondary blocks can appear anywhere. + * @param outpkts an array of buffers into which the reconstructed output packets will be written (only packets which are not present in the inpkts input will be reconstructed and written to outpkts) + * @param index an array of the blocknums of the packets in inpkts + * @param sz size of a packet in bytes + */ +void fec_decode(const fec_t* code, const gf** inpkts, gf** outpkts, const unsigned* index, size_t sz); + +#if defined(_MSC_VER) +#define alloca _alloca +#else +#ifdef __GNUC__ +#ifndef alloca +#define alloca(x) __builtin_alloca(x) +#endif +#else +#include +#endif +#endif + +/** + * zfec -- fast forward error correction library with Python interface + * + * Copyright (C) 2007-2008 Allmydata, Inc. + * Author: Zooko Wilcox-O'Hearn + * + * This file is part of zfec. + * + * See README.rst for licensing information. + */ + +/* + * Much of this work is derived from the "fec" software by Luigi Rizzo, et + * al., the copyright notice and licence terms of which are included below + * for reference. + * + * fec.h -- forward error correction based on Vandermonde matrices + * 980614 + * (C) 1997-98 Luigi Rizzo (luigi@iet.unipi.it) + * + * Portions derived from code by Phil Karn (karn@ka9q.ampr.org), + * Robert Morelos-Zaragoza (robert@spectra.eng.hawaii.edu) and Hari + * Thirumoorthy (harit@spectra.eng.hawaii.edu), Aug 1995 + * + * Modifications by Dan Rubenstein (see Modifications.txt for + * their description. + * Modifications (C) 1998 Dan Rubenstein (drubenst@cs.umass.edu) + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and/or other materials + * provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A + * PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, + * OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, + * OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR + * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT + * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY + * OF SUCH DAMAGE. 
+ */ + diff --git a/modem/fec/schifra_crc.hpp b/modem/fec/schifra_crc.hpp new file mode 100644 index 0000000..62b1073 --- /dev/null +++ b/modem/fec/schifra_crc.hpp @@ -0,0 +1,172 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_CRC_HPP +#define INCLUDE_SCHIFRA_CRC_HPP + + +#include +#include + + +namespace schifra +{ + + class crc32 + { + public: + + typedef std::size_t crc32_t; + + crc32(const crc32_t& _key, const crc32_t& _state = 0x00) + : key(_key), + state(_state), + initial_state(_state) + { + initialize_crc32_table(); + } + + void reset() + { + state = initial_state; + } + + void update_1byte(const unsigned char data) + { + state = (state >> 8) ^ table[data]; + } + + void update(const unsigned char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(data[i]); + } + } + + void update(char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::string& data) + { + for (std::size_t i = 0; i < data.size(); ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::size_t& data) + { + update_1byte(static_cast((data ) & 0xFF)); + update_1byte(static_cast((data >> 8) & 0xFF)); + update_1byte(static_cast((data >> 16) & 0xFF)); + update_1byte(static_cast((data >> 24) & 0xFF)); + } + + crc32_t crc() + { + return state; + } + + private: + + crc32& operator=(const crc32&); + + void initialize_crc32_table() + { + for (std::size_t i = 0; i < 0xFF; ++i) + { + crc32_t reg = i; + + for (int j = 0; j < 0x08; ++j) + { + reg = ((reg & 1) ? 
(reg >> 1) ^ key : reg >> 1); + } + + table[i] = reg; + } + } + + protected: + + crc32_t key; + crc32_t state; + const crc32_t initial_state; + crc32_t table[256]; + }; + + class schifra_crc : public crc32 + { + public: + + schifra_crc(const crc32_t _key) + : crc32(_key,0xAAAAAAAA) + {} + + void update(const unsigned char& data) + { + state = ((state >> 8) ^ table[data]) ^ ((state << 8) ^ table[~data]); + } + + void update(const unsigned char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(data[i]); + } + } + + void update(const char data[], const std::size_t& count) + { + for (std::size_t i = 0; i < count; ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::string& data) + { + for (std::size_t i = 0; i < data.size(); ++i) + { + update_1byte(static_cast(data[i])); + } + } + + void update(const std::size_t& data) + { + update_1byte(static_cast((data ) & 0xFF)); + update_1byte(static_cast((data >> 8) & 0xFF)); + update_1byte(static_cast((data >> 16) & 0xFF)); + update_1byte(static_cast((data >> 24) & 0xFF)); + } + + }; + +} // namespace schifra + + +#endif diff --git a/modem/fec/schifra_ecc_traits.hpp b/modem/fec/schifra_ecc_traits.hpp new file mode 100644 index 0000000..879d056 --- /dev/null +++ b/modem/fec/schifra_ecc_traits.hpp @@ -0,0 +1,109 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_ECC_TRAITS_HPP +#define INCLUDE_SCHIFRA_ECC_TRAITS_HPP + + +namespace schifra +{ + namespace traits + { + + template struct symbol; + /* bits per symbol */ + template <> struct symbol< 3> { enum {size = 2}; }; + template <> struct symbol< 7> { enum {size = 3}; }; + template <> struct symbol< 15> { enum {size = 4}; }; + template <> struct symbol< 31> { enum {size = 5}; }; + template <> struct symbol< 63> { enum {size = 6}; }; + template <> struct symbol< 127> { enum {size = 7}; }; + template <> struct symbol< 255> { enum {size = 8}; }; + template <> struct symbol< 511> { enum {size = 9}; }; + template <> struct symbol< 1023> { enum {size = 10}; }; + template <> struct symbol< 2047> { enum {size = 11}; }; + template <> struct symbol< 4195> { enum {size = 12}; }; + template <> struct symbol< 8191> { enum {size = 13}; }; + template <> struct symbol<16383> { enum {size = 14}; }; + template <> struct symbol<32768> { enum {size = 15}; }; + template <> struct symbol<65535> { enum {size = 16}; }; + + /* Credits: Modern C++ Design - Andrei Alexandrescu */ + template class __static_assert__ + { + public: + + __static_assert__(...) 
{} + }; + + template <> class __static_assert__ {}; + template <> class __static_assert__; + + template + struct validate_reed_solomon_code_parameters + { + private: + + __static_assert__<(code_length > 0)> assertion1; + __static_assert__<(code_length > fec_length)> assertion2; + __static_assert__<(code_length > data_length)> assertion3; + __static_assert__<(code_length == fec_length + data_length)> assertion4; + }; + + template + struct validate_reed_solomon_block_parameters + { + private: + + __static_assert__<(code_length > 0)> assertion1; + __static_assert__<(code_length > fec_length)> assertion2; + __static_assert__<(code_length > data_length)> assertion3; + __static_assert__<(code_length == fec_length + data_length)> assertion4; + }; + + template + struct equivalent_encoder_decoder + { + private: + + __static_assert__<(Encoder::trait::code_length == Decoder::trait::code_length)> assertion1; + __static_assert__<(Encoder::trait::fec_length == Decoder::trait::fec_length) > assertion2; + __static_assert__<(Encoder::trait::data_length == Decoder::trait::data_length)> assertion3; + }; + + template + class reed_solomon_triat + { + public: + + typedef validate_reed_solomon_code_parameters vrscp; + + enum { code_length = code_length_ }; + enum { fec_length = fec_length_ }; + enum { data_length = data_length_ }; + }; + + } + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_erasure_channel.hpp b/modem/fec/schifra_erasure_channel.hpp new file mode 100644 index 0000000..194107a --- /dev/null +++ b/modem/fec/schifra_erasure_channel.hpp @@ -0,0 +1,256 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_ERASURE_CHANNEL_HPP +#define INCLUDE_SCHIFRA_ERASURE_CHANNEL_HPP + + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_reed_solomon_interleaving.hpp" +#include "schifra_utilities.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + inline void interleaved_stack_erasure_mapper(const std::vector& missing_row_index, + std::vector& erasure_row_list) + { + erasure_row_list.resize(block_length); + + for (std::size_t i = 0; i < block_length; ++i) + { + erasure_row_list[i].reserve(fec_length); + } + + for (std::size_t i = 0; i < missing_row_index.size(); ++i) + { + for (std::size_t j = 0; j < block_length; ++j) + { + erasure_row_list[j].push_back(missing_row_index[i]); + } + } + } + + template + inline bool erasure_channel_stack_encode(const encoder& encoder, + block (&output)[code_length]) + { + for (std::size_t i = 0; i < code_length; ++i) + { + if (!encoder.encode(output[i])) + { + std::cout << "erasure_channel_stack_encode() - Error: Failed to encode block[" << i <<"]" << std::endl; + + return false; + } + } + + interleave(output); + + return true; + } + + template + class erasure_code_decoder : public decoder + { + public: + + typedef decoder decoder_type; + typedef typename decoder_type::block_type block_type; + typedef std::vector polynomial_list_type; + + erasure_code_decoder(const galois::field& gfield, + const unsigned int& gen_initial_index) + : decoder(gfield, gen_initial_index) + { + for (std::size_t i = 0; i < code_length; ++i) + { + received_.push_back(galois::field_polynomial(decoder_type::field_, code_length - 1)); + syndrome_.push_back(galois::field_polynomial(decoder_type::field_)); + } + }; + + bool decode(block_type rsblock[code_length], const erasure_locations_t& erasure_list) const + { + if ( + (!decoder_type::decoder_valid_) || + (erasure_list.size() != fec_length) + ) + { + return false; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + decoder_type::load_message (received_[i], rsblock [i]); + decoder_type::compute_syndrome(received_[i], syndrome_[i]); + } + + erasure_locations_t erasure_locations; + decoder_type::prepare_erasure_list(erasure_locations,erasure_list); + + galois::field_polynomial gamma(galois::field_element(decoder_type::field_, 1)); + + decoder_type::compute_gamma(gamma,erasure_locations); + + std::vector gamma_roots; + + find_roots_in_data(gamma,gamma_roots); + + polynomial_list_type omega; + + for (std::size_t i = 0; i < code_length; ++i) + { + omega.push_back((gamma * syndrome_[i]) % fec_length); + } + + galois::field_polynomial gamma_derivative = gamma.derivative(); + + for (std::size_t i = 0; i < gamma_roots.size(); ++i) + { + int error_location = static_cast(gamma_roots[i]); + galois::field_symbol alpha_inverse = decoder_type::field_.alpha(error_location); + galois::field_element denominator = gamma_derivative(alpha_inverse); + + if (denominator == 0) + { + return false; + } + + for (std::size_t j = 0; j < code_length; ++j) + { + galois::field_element numerator = (omega[j](alpha_inverse) * decoder_type::root_exponent_table_[error_location]); + /* + A minor optimization can be made in the event the + numerator is equal to zero by not executing the + following line. 
+ */ + rsblock[j][error_location - 1] ^= decoder_type::field_.div(numerator.poly(),denominator.poly()); + } + } + + return true; + } + + private: + + void find_roots_in_data(const galois::field_polynomial& poly, std::vector& root_list) const + { + /* + Chien Search, as described in parent, but only + for locations within the data range of the message. + */ + root_list.reserve(fec_length << 1); + root_list.resize(0); + + std::size_t polynomial_degree = poly.deg(); + std::size_t root_list_size = 0; + + for (int i = 1; i <= static_cast(data_length); ++i) + { + if (0 == poly(decoder_type::field_.alpha(i)).poly()) + { + root_list.push_back(i); + root_list_size++; + + if (root_list_size == polynomial_degree) + { + break; + } + } + } + } + + mutable polynomial_list_type received_; + mutable polynomial_list_type syndrome_; + + }; + + template + inline bool erasure_channel_stack_decode(const decoder& general_decoder, + const erasure_locations_t& missing_row_index, + block (&output)[code_length]) + { + if (missing_row_index.empty()) + { + return true; + } + + interleave(output); + + for (std::size_t i = 0; i < code_length; ++i) + { + if (!general_decoder.decode(output[i],missing_row_index)) + { + std::cout << "[2] erasure_channel_stack_decode() - Error: Failed to decode block[" << i <<"]" << std::endl; + + return false; + } + } + + return true; + } + + template + inline bool erasure_channel_stack_decode(const erasure_code_decoder& erasure_decoder, + const erasure_locations_t& missing_row_index, + block (&output)[code_length]) + { + /* + Note: 1. Missing row indicies must be unique. + 2. Missing row indicies must exist within + the stack's size. + 3. There will be NO errors in the rows (aka output) + 4. The information members of the blocks will + not be utilized. + There are NO exceptions to these rules! + */ + if (missing_row_index.empty()) + { + return true; + } + else if (missing_row_index.size() == fec_length) + { + interleave(output); + + return erasure_decoder.decode(output,missing_row_index); + } + else + return erasure_channel_stack_decode( + static_cast&>(erasure_decoder), + missing_row_index, + output); + } + + } // namespace reed_solomon + +} // namepsace schifra + + +#endif diff --git a/modem/fec/schifra_error_processes.hpp b/modem/fec/schifra_error_processes.hpp new file mode 100644 index 0000000..d2f61fe --- /dev/null +++ b/modem/fec/schifra_error_processes.hpp @@ -0,0 +1,602 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_ERROR_PROCESSES_HPP +#define INCLUDE_SCHIFRA_ERROR_PROCESSES_HPP + + +#include +#include +#include +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + template + inline void add_erasure_error(const std::size_t& position, reed_solomon::block& block) + { + block[position] = (~block[position]) & 0xFF; // Or one can simply equate to zero + } + + template + inline void add_error(const std::size_t& position, reed_solomon::block& block) + { + block[position] = (~block[position]) & 0xFF; + } + + template + inline void add_error_4bit_symbol(const std::size_t& position, reed_solomon::block& block) + { + block[position] = (~block[position]) & 0x0F; + } + + template + inline void corrupt_message_all_errors00(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + for (std::size_t i = 0; i < (fec_length >> 1); ++i) + { + add_error((start_position + scale * i) % code_length,rsblock); + } + } + + template + inline void corrupt_message_all_errors_wth_mask(reed_solomon::block& rsblock, + const std::size_t& start_position, + const int& mask, + const std::size_t& scale = 1) + { + for (std::size_t i = 0; i < (fec_length >> 1); ++i) + { + std::size_t position = (start_position + scale * i) % code_length; + rsblock[position] = (~rsblock[position]) & mask; + + } + } + + template + inline void corrupt_message_all_errors(schifra::reed_solomon::block& rsblock, + const std::size_t error_count, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + for (std::size_t i = 0; i < error_count; ++i) + { + add_error((start_position + scale * i) % code_length,rsblock); + } + } + + template + inline void corrupt_message_all_erasures00(reed_solomon::block& rsblock, + reed_solomon::erasure_locations_t& erasure_list, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + for (std::size_t i = 0; i < fec_length; ++i) + { + std::size_t error_position = (start_position + scale * i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + if (erasures[i] == 1) erasure_list.push_back(i); + } + } + + template + inline void corrupt_message_all_erasures(reed_solomon::block& rsblock, + reed_solomon::erasure_locations_t& erasure_list, + const std::size_t erasure_count, + const std::size_t& start_position, + const std::size_t& scale = 1) + { + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + for (std::size_t i = 0; i < erasure_count; ++i) + { + /* Note: Must make sure duplicate erasures are not added */ + std::size_t error_position = (start_position + scale * i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + if (erasures[i] == 1) erasure_list.push_back(i); + } + } + + namespace error_mode + { + enum type + { + errors_erasures, // Errors first then erasures + erasures_errors // Erasures first then errors + }; + } + + template + inline void corrupt_message_errors_erasures(reed_solomon::block& rsblock, + const error_mode::type& mode, + const 
std::size_t& start_position, + const std::size_t& erasure_count, + reed_solomon::erasure_locations_t& erasure_list, + const std::size_t between_space = 0) + { + std::size_t error_count = (fec_length - erasure_count) >> 1; + + if ((2 * error_count) + erasure_count > fec_length) + { + std::cout << "corrupt_message_errors_erasures() - ERROR Too many erasures and errors!" << std::endl; + std::cout << "Error Count: " << error_count << std::endl; + std::cout << "Erasure Count: " << error_count << std::endl; + + return; + } + + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + std::size_t error_position = 0; + + switch (mode) + { + case error_mode::erasures_errors : { + for (std::size_t i = 0; i < erasure_count; ++i) + { + error_position = (start_position + i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + + for (std::size_t i = 0; i < error_count; ++i) + { + error_position = (start_position + erasure_count + between_space + i) % code_length; + add_error(error_position,rsblock); + } + } + break; + + case error_mode::errors_erasures : { + for (std::size_t i = 0; i < error_count; ++i) + { + error_position = (start_position + i) % code_length; + add_error(error_position,rsblock); + } + + for (std::size_t i = 0; i < erasure_count; ++i) + { + error_position = (start_position + error_count + between_space + i) % code_length; + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + } + } + break; + } + + for (std::size_t i = 0; i < code_length; ++i) + { + if (erasures[i] == 1) erasure_list.push_back(i); + } + + } + + template + inline void corrupt_message_interleaved_errors_erasures(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& erasure_count, + reed_solomon::erasure_locations_t& erasure_list) + { + std::size_t error_count = (fec_length - erasure_count) >> 1; + + if ((2 * error_count) + erasure_count > fec_length) + { + std::cout << "corrupt_message_interleaved_errors_erasures() - [1] ERROR Too many erasures and errors!" << std::endl; + std::cout << "Error Count: " << error_count << std::endl; + std::cout << "Erasure Count: " << error_count << std::endl; + + return; + } + + std::size_t erasures[code_length]; + + for (std::size_t i = 0; i < code_length; ++i) erasures[i] = 0; + + std::size_t e = 0; + std::size_t s = 0; + std::size_t i = 0; + + while ((e < error_count) || (s < erasure_count) || (i < (error_count + erasure_count))) + { + std::size_t error_position = (start_position + i) % code_length; + + if (((i & 0x01) == 0) && (s < erasure_count)) + { + add_erasure_error(error_position,rsblock); + erasures[error_position] = 1; + s++; + } + else if (((i & 0x01) == 1) && (e < error_count)) + { + e++; + add_error(error_position,rsblock); + } + ++i; + } + + for (std::size_t j = 0; j < code_length; ++j) + { + if (erasures[j] == 1) erasure_list.push_back(j); + } + + if ((2 * e) + erasure_list.size() > fec_length) + { + std::cout << "corrupt_message_interleaved_errors_erasures() - [2] ERROR Too many erasures and errors!" 
<< std::endl; + std::cout << "Error Count: " << error_count << std::endl; + std::cout << "Erasure Count: " << error_count << std::endl; + + return; + } + } + + namespace details + { + template + struct corrupt_message_all_errors_segmented_impl + { + static void process(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& distance_between_blocks = 1) + { + std::size_t block_1_error_count = (fec_length >> 2); + std::size_t block_2_error_count = (fec_length >> 1) - block_1_error_count; + + for (std::size_t i = 0; i < block_1_error_count; ++i) + { + add_error((start_position + i) % code_length,rsblock); + } + + std::size_t new_start_position = (start_position + (block_1_error_count)) + distance_between_blocks; + + for (std::size_t i = 0; i < block_2_error_count; ++i) + { + add_error((new_start_position + i) % code_length,rsblock); + } + } + }; + + template + struct corrupt_message_all_errors_segmented_impl + { + static void process(reed_solomon::block&, + const std::size_t&, const std::size_t&) + {} + }; + } + + template + inline void corrupt_message_all_errors_segmented(reed_solomon::block& rsblock, + const std::size_t& start_position, + const std::size_t& distance_between_blocks = 1) + { + details::corrupt_message_all_errors_segmented_impl 2)>:: + process(rsblock,start_position,distance_between_blocks); + } + + inline bool check_for_duplicate_erasures(const std::vector& erasure_list) + { + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + for (std::size_t j = i + 1; j < erasure_list.size(); ++j) + { + if (erasure_list[i] == erasure_list[j]) + { + return false; + } + } + } + + return true; + } + + inline void dump_erasure_list(const schifra::reed_solomon::erasure_locations_t& erasure_list) + { + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + std::cout << "[" << i << "," << erasure_list[i] << "] "; + } + + std::cout << std::endl; + } + + template + inline bool is_block_equivelent(const reed_solomon::block& rsblock, + const std::string& data, + const bool display = false, + const bool all_errors = false) + { + std::string::const_iterator it = data.begin(); + + bool error_found = false; + + for (std::size_t i = 0; i < code_length - fec_length; ++i, ++it) + { + if (static_cast(rsblock.data[i] & 0xFF) != (*it)) + { + error_found = true; + + if (display) + { + printf("is_block_equivelent() - Error at loc : %02d\td1: %02X\td2: %02X\n", + static_cast(i), + rsblock.data[i], + static_cast(*it)); + } + + if (!all_errors) + return false; + } + } + + return !error_found; + } + + template + inline bool are_blocks_equivelent(const reed_solomon::block& block1, + const reed_solomon::block& block2, + const std::size_t span = code_length, + const bool display = false, + const bool all_errors = false) + { + bool error_found = false; + + for (std::size_t i = 0; i < span; ++i) + { + if (block1[i] != block2[i]) + { + error_found = true; + + if (display) + { + printf("are_blocks_equivelent() - Error at loc : %02d\td1: %04X\td2: %04X\n", + static_cast(i), + block1[i], + block2[i]); + } + + if (!all_errors) + return false; + } + } + + return !error_found; + } + + template + inline bool block_stacks_equivelent(const reed_solomon::block block_stack1[stack_size], + const reed_solomon::block block_stack2[stack_size]) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + if (!are_blocks_equivelent(block_stack1[i],block_stack2[i])) + { + return false; + } + } + + return true; + } + + template + inline bool block_stacks_equivelent(const reed_solomon::data_block 
block_stack1[stack_size], + const reed_solomon::data_block block_stack2[stack_size]) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + for (std::size_t j = 0; j < block_length; ++j) + { + if (block_stack1[i][j] != block_stack2[i][j]) + { + return false; + } + } + } + + return true; + } + + inline void corrupt_file_with_burst_errors(const std::string& file_name, + const long& start_position, + const long& burst_length) + { + if (!schifra::fileio::file_exists(file_name)) + { + std::cout << "corrupt_file() - Error: " << file_name << " does not exist!" << std::endl; + return; + } + + if (static_cast(start_position + burst_length) >= schifra::fileio::file_size(file_name)) + { + std::cout << "corrupt_file() - Error: Burst error out of bounds." << std::endl; + return; + } + + std::vector data(burst_length); + + std::ifstream ifile(file_name.c_str(), std::ios::in | std::ios::binary); + + if (!ifile) + { + return; + } + + ifile.seekg(start_position,std::ios_base::beg); + ifile.read(&data[0],burst_length); + ifile.close(); + + for (long i = 0; i < burst_length; ++i) + { + data[i] = ~data[i]; + } + + std::ofstream ofile(file_name.c_str(), std::ios::in | std::ios::out | std::ios::binary); + + if (!ofile) + { + return; + } + + ofile.seekp(start_position,std::ios_base::beg); + ofile.write(&data[0],burst_length); + ofile.close(); + } + + static const std::size_t global_random_error_index[] = + { + 13, 170, 148, 66, 228, 208, 182, 92, + 4, 137, 97, 99, 237, 151, 15, 0, + 119, 243, 41, 222, 33, 211, 188, 5, + 44, 30, 210, 111, 54, 79, 61, 223, + 239, 149, 73, 115, 201, 234, 194, 62, + 147, 70, 19, 49, 72, 52, 164, 29, + 102, 225, 203, 153, 18, 205, 40, 217, + 165, 177, 166, 134, 236, 68, 231, 154, + 116, 136, 47, 240, 46, 89, 120, 183, + 242, 28, 161, 226, 241, 230, 10, 131, + 207, 132, 83, 171, 202, 195, 227, 206, + 112, 88, 90, 146, 117, 180, 26, 78, + 118, 254, 107, 110, 220, 7, 192, 187, + 31, 175, 127, 209, 32, 12, 84, 128, + 190, 156, 95, 105, 104, 246, 91, 215, + 219, 142, 36, 186, 247, 233, 167, 133, + 160, 16, 140, 169, 23, 96, 155, 235, + 179, 76, 253, 103, 238, 67, 35, 121, + 100, 27, 213, 58, 77, 248, 174, 39, + 214, 56, 42, 200, 106, 21, 129, 114, + 252, 113, 168, 53, 25, 216, 64, 232, + 81, 75, 2, 224, 250, 60, 135, 204, + 48, 196, 94, 63, 244, 191, 93, 126, + 138, 159, 9, 85, 249, 34, 185, 163, + 17, 65, 184, 82, 109, 172, 108, 69, + 150, 3, 20, 221, 162, 212, 152, 59, + 198, 74, 229, 55, 87, 178, 141, 199, + 57, 130, 80, 173, 101, 122, 144, 51, + 139, 11, 8, 125, 158, 124, 123, 37, + 14, 24, 22, 43, 197, 50, 98, 6, + 176, 251, 86, 218, 193, 71, 145, 1, + 45, 38, 189, 143, 245, 157, 181 + }; + + static const std::size_t error_index_size = sizeof(global_random_error_index) / sizeof(std::size_t); + + template + inline void corrupt_message_all_errors_at_index(schifra::reed_solomon::block& rsblock, + const std::size_t error_count, + const std::size_t& error_index_start_position, + const bool display_positions = false) + { + schifra::reed_solomon::block tmp_rsblock = rsblock; + + for (std::size_t i = 0; i < error_count; ++i) + { + std::size_t error_position = (global_random_error_index[(error_index_start_position + i) % error_index_size]) % code_length; + + add_error(error_position,rsblock); + + if (display_positions) + { + std::cout << "Error index: " << error_position << std::endl; + } + } + } + + template + inline void corrupt_message_all_errors_at_index(schifra::reed_solomon::block& rsblock, + const std::size_t error_count, + const std::size_t& error_index_start_position, + const 
std::vector& random_error_index, + const bool display_positions = false) + { + for (std::size_t i = 0; i < error_count; ++i) + { + std::size_t error_position = (random_error_index[(error_index_start_position + i) % random_error_index.size()]) % code_length; + + add_error(error_position,rsblock); + + if (display_positions) + { + std::cout << "Error index: " << error_position << std::endl; + } + } + } + + inline void generate_error_index(const std::size_t index_size, + std::vector& random_error_index, + std::size_t seed) + { + if (0 == seed) + { + seed = 0xA5A5A5A5; + } + + ::srand(static_cast(seed)); + + std::deque index_list; + + for (std::size_t i = 0; i < index_size; ++i) + { + index_list.push_back(i); + } + + random_error_index.reserve(index_size); + random_error_index.resize(0); + + while (!index_list.empty()) + { + // possibly the worst way of doing this. + std::size_t index = ::rand() % index_list.size(); + + random_error_index.push_back(index_list[index]); + index_list.erase(index_list.begin() + index); + } + } + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_fileio.hpp b/modem/fec/schifra_fileio.hpp new file mode 100644 index 0000000..00443a1 --- /dev/null +++ b/modem/fec/schifra_fileio.hpp @@ -0,0 +1,227 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_FILEIO_HPP +#define INCLUDE_SCHIFRA_FILEIO_HPP + + +#include +#include +#include +#include +#include + +#include "schifra_crc.hpp" + + +namespace schifra +{ + + namespace fileio + { + + inline void read_into_vector(const std::string& file_name, std::vector& buffer) + { + std::ifstream file(file_name.c_str()); + if (!file) return; + std::string line; + while (std::getline(file,line)) + { + buffer.push_back(line); + } + file.close(); + } + + inline void write_from_vector(const std::string& file_name, const std::vector& buffer) + { + std::ofstream file(file_name.c_str()); + if (!file) return; + std::ostream_iterator os(file,"\n"); + std::copy(buffer.begin(),buffer.end(), os); + file.close(); + } + + inline bool file_exists(const std::string& file_name) + { + std::ifstream file(file_name.c_str(), std::ios::binary); + return ((!file) ? 
false : true); + } + + inline std::size_t file_size(const std::string& file_name) + { + std::ifstream file(file_name.c_str(),std::ios::binary); + if (!file) return 0; + file.seekg (0, std::ios::end); + return static_cast(file.tellg()); + } + + inline void load_file(const std::string& file_name, std::string& buffer) + { + std::ifstream file(file_name.c_str(), std::ios::binary); + if (!file) return; + buffer.assign(std::istreambuf_iterator(file),std::istreambuf_iterator()); + file.close(); + } + + inline void load_file(const std::string& file_name, char** buffer, std::size_t& buffer_size) + { + std::ifstream in_stream(file_name.c_str(),std::ios::binary); + if (!in_stream) return; + buffer_size = file_size(file_name); + *buffer = new char[buffer_size]; + in_stream.read(*buffer,static_cast(buffer_size)); + in_stream.close(); + } + + inline void write_file(const std::string& file_name, const std::string& buffer) + { + std::ofstream file(file_name.c_str(),std::ios::binary); + file << buffer; + file.close(); + } + + inline void write_file(const std::string& file_name, char* buffer, const std::size_t& buffer_size) + { + std::ofstream out_stream(file_name.c_str(),std::ios::binary); + if (!out_stream) return; + out_stream.write(buffer,static_cast(buffer_size)); + out_stream.close(); + } + + inline bool copy_file(const std::string& src_file_name, const std::string& dest_file_name) + { + std::ifstream src_file(src_file_name.c_str(),std::ios::binary); + std::ofstream dest_file(dest_file_name.c_str(),std::ios::binary); + if (!src_file) return false; + if (!dest_file) return false; + + const std::size_t block_size = 1024; + char buffer[block_size]; + + std::size_t remaining_bytes = file_size(src_file_name); + + while (remaining_bytes >= block_size) + { + src_file.read(&buffer[0],static_cast(block_size)); + dest_file.write(&buffer[0],static_cast(block_size)); + remaining_bytes -= block_size; + } + + if (remaining_bytes > 0) + { + src_file.read(&buffer[0],static_cast(remaining_bytes)); + dest_file.write(&buffer[0],static_cast(remaining_bytes)); + remaining_bytes = 0; + } + + src_file.close(); + dest_file.close(); + + return true; + } + + inline bool files_identical(const std::string& file_name1, const std::string& file_name2) + { + std::ifstream file1(file_name1.c_str(),std::ios::binary); + std::ifstream file2(file_name2.c_str(),std::ios::binary); + if (!file1) return false; + if (!file2) return false; + if (file_size(file_name1) != file_size(file_name2)) return false; + + const std::size_t block_size = 1024; + char buffer1[block_size]; + char buffer2[block_size]; + + std::size_t remaining_bytes = file_size(file_name1); + + while (remaining_bytes >= block_size) + { + file1.read(&buffer1[0],static_cast(block_size)); + file2.read(&buffer2[0],static_cast(block_size)); + + for (std::size_t i = 0; i < block_size; ++i) + { + if (buffer1[i] != buffer2[i]) + { + return false; + } + } + + remaining_bytes -= block_size; + } + + if (remaining_bytes > 0) + { + file1.read(&buffer1[0],static_cast(remaining_bytes)); + file2.read(&buffer2[0],static_cast(remaining_bytes)); + + for (std::size_t i = 0; i < remaining_bytes; ++i) + { + if (buffer1[i] != buffer2[i]) + { + return false; + } + } + + remaining_bytes = 0; + } + + file1.close(); + file2.close(); + + return true; + } + + inline std::size_t file_crc(crc32& crc_module, const std::string& file_name) + { + std::ifstream file(file_name.c_str(),std::ios::binary); + if (!file) return 0; + + const std::size_t block_size = 1024; + char buffer[block_size]; + + std::size_t 
remaining_bytes = file_size(file_name); + + crc_module.reset(); + + while (remaining_bytes >= block_size) + { + file.read(&buffer[0],static_cast(block_size)); + crc_module.update(buffer,block_size); + remaining_bytes -= block_size; + } + + if (remaining_bytes > 0) + { + file.read(&buffer[0],static_cast(remaining_bytes)); + crc_module.update(buffer,remaining_bytes); + remaining_bytes = 0; + } + + return crc_module.crc(); + } + + } // namespace fileio + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_galois_field.hpp b/modem/fec/schifra_galois_field.hpp new file mode 100644 index 0000000..ec7ee3a --- /dev/null +++ b/modem/fec/schifra_galois_field.hpp @@ -0,0 +1,518 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_GALOIS_FIELD_HPP +#define INCLUDE_SCHIFRA_GALOIS_FIELD_HPP + + +#include +#include +#include +#include +#include + + +namespace schifra +{ + + namespace galois + { + + typedef int field_symbol; + const field_symbol GFERROR = -1; + + class field + { + public: + + field(const int pwr, const std::size_t primpoly_deg, const unsigned int* primitive_poly); + ~field(); + + bool operator==(const field& gf) const; + bool operator!=(const field& gf) const; + + inline field_symbol index(const field_symbol value) const + { + return index_of_[value]; + } + + inline field_symbol alpha(const field_symbol value) const + { + return alpha_to_[value]; + } + + inline unsigned int size() const + { + return field_size_; + } + + inline unsigned int pwr() const + { + return power_; + } + + inline unsigned int mask() const + { + return field_size_; + } + + inline field_symbol add(const field_symbol& a, const field_symbol& b) const + { + return (a ^ b); + } + + inline field_symbol sub(const field_symbol& a, const field_symbol& b) const + { + return (a ^ b); + } + + inline field_symbol normalize(field_symbol x) const + { + while (x < 0) + { + x += static_cast(field_size_); + } + + while (x >= static_cast(field_size_)) + { + x -= static_cast(field_size_); + x = (x >> power_) + (x & field_size_); + } + + return x; + } + + inline field_symbol mul(const field_symbol& a, const field_symbol& b) const + { + #if !defined(NO_GFLUT) + return mul_table_[a][b]; + #else + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] + index_of_[b])]; + #endif + } + + inline field_symbol div(const field_symbol& a, const field_symbol& b) const + { + #if !defined(NO_GFLUT) + return div_table_[a][b]; + #else + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] - index_of_[b] + field_size_)]; + #endif + } + + inline field_symbol exp(const field_symbol& a, int n) const + { + #if !defined(NO_GFLUT) + if (n >= 0) + return exp_table_[a][n & field_size_]; + else + { + while (n < 0) n += 
field_size_; + + return (n ? exp_table_[a][n] : 1); + } + #else + if (a != 0) + { + if (n < 0) + { + while (n < 0) n += field_size_; + return (n ? alpha_to_[normalize(index_of_[a] * n)] : 1); + } + else if (n) + return alpha_to_[normalize(index_of_[a] * static_cast(n))]; + else + return 1; + } + else + return 0; + #endif + } + + #ifdef LINEAR_EXP_LUT + inline field_symbol* const linear_exp(const field_symbol& a) const + { + #if !defined(NO_GFLUT) + static const field_symbol upper_bound = 2 * field_size_; + if ((a >= 0) && (a <= upper_bound)) + return linear_exp_table_[a]; + else + return reinterpret_cast(0); + #else + return reinterpret_cast(0); + #endif + } + #endif + + inline field_symbol inverse(const field_symbol& val) const + { + #if !defined(NO_GFLUT) + return mul_inverse_[val]; + #else + return alpha_to_[normalize(field_size_ - index_of_[val])]; + #endif + } + + inline unsigned int prim_poly_term(const unsigned int index) const + { + return prim_poly_[index]; + } + + friend std::ostream& operator << (std::ostream& os, const field& gf); + + private: + + field(); + field(const field& gfield); + field& operator=(const field& gfield); + + void generate_field(const unsigned int* prim_poly_); + field_symbol gen_mul (const field_symbol& a, const field_symbol& b) const; + field_symbol gen_div (const field_symbol& a, const field_symbol& b) const; + field_symbol gen_exp (const field_symbol& a, const std::size_t& n) const; + field_symbol gen_inverse (const field_symbol& val) const; + + std::size_t create_array(char buffer_[], + const std::size_t& length, + const std::size_t offset, + field_symbol** array); + + std::size_t create_2d_array(char buffer_[], + std::size_t row_cnt, std::size_t col_cnt, + const std::size_t offset, + field_symbol*** array); + unsigned int power_; + std::size_t prim_poly_deg_; + unsigned int field_size_; + unsigned int prim_poly_hash_; + unsigned int* prim_poly_; + field_symbol* alpha_to_; // aka exponential or anti-log + field_symbol* index_of_; // aka log + field_symbol* mul_inverse_; // multiplicative inverse + field_symbol** mul_table_; + field_symbol** div_table_; + field_symbol** exp_table_; + field_symbol** linear_exp_table_; + char* buffer_; + }; + + inline field::field(const int pwr, const std::size_t primpoly_deg, const unsigned int* primitive_poly) + : power_(pwr), + prim_poly_deg_(primpoly_deg), + field_size_((1 << power_) - 1) + { + alpha_to_ = new field_symbol [field_size_ + 1]; + index_of_ = new field_symbol [field_size_ + 1]; + + #if !defined(NO_GFLUT) + + #ifdef LINEAR_EXP_LUT + static const std::size_t buffer_size = ((6 * (field_size_ + 1) * (field_size_ + 1)) + ((field_size_ + 1) * 2)) * sizeof(field_symbol); + #else + static const std::size_t buffer_size = ((4 * (field_size_ + 1) * (field_size_ + 1)) + ((field_size_ + 1) * 2)) * sizeof(field_symbol); + #endif + + buffer_ = new char[buffer_size]; + std::size_t offset = 0; + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1),offset,&mul_table_); + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1),offset,&div_table_); + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1),offset,&exp_table_); + + #ifdef LINEAR_EXP_LUT + offset = create_2d_array(buffer_,(field_size_ + 1),(field_size_ + 1) * 2,offset,&linear_exp_table_); + #else + linear_exp_table_ = 0; + #endif + + offset = create_array(buffer_,(field_size_ + 1) * 2,offset,&mul_inverse_); + + #else + + buffer_ = 0; + mul_table_ = 0; + div_table_ = 0; + exp_table_ = 0; + mul_inverse_ = 0; + 
linear_exp_table_ = 0; + + #endif + + prim_poly_ = new unsigned int [prim_poly_deg_ + 1]; + + for (unsigned int i = 0; i < (prim_poly_deg_ + 1); ++i) + { + prim_poly_[i] = primitive_poly[i]; + } + + prim_poly_hash_ = 0xAAAAAAAA; + + for (std::size_t i = 0; i < (prim_poly_deg_ + 1); ++i) + { + prim_poly_hash_ += ((i & 1) == 0) ? ( (prim_poly_hash_ << 7) ^ primitive_poly[i] * (prim_poly_hash_ >> 3)) : + (~((prim_poly_hash_ << 11) + (primitive_poly[i] ^ (prim_poly_hash_ >> 5)))); + } + + generate_field(primitive_poly); + } + + inline field::~field() + { + if (0 != alpha_to_) { delete [] alpha_to_; alpha_to_ = 0; } + if (0 != index_of_) { delete [] index_of_; index_of_ = 0; } + if (0 != prim_poly_) { delete [] prim_poly_; prim_poly_ = 0; } + + #if !defined(NO_GFLUT) + + if (0 != mul_table_) { delete [] mul_table_; mul_table_ = 0; } + if (0 != div_table_) { delete [] div_table_; div_table_ = 0; } + if (0 != exp_table_) { delete [] exp_table_; exp_table_ = 0; } + + #ifdef LINEAR_EXP_LUT + if (0 != linear_exp_table_) { delete [] linear_exp_table_; linear_exp_table_ = 0; } + #endif + + if (0 != buffer_) { delete [] buffer_; buffer_ = 0; } + + #endif + } + + inline bool field::operator==(const field& gf) const + { + return ( + (this->power_ == gf.power_) && + (this->prim_poly_hash_ == gf.prim_poly_hash_) + ); + } + + inline bool field::operator!=(const field& gf) const + { + return !field::operator ==(gf); + } + + inline void field::generate_field(const unsigned int* prim_poly) + { + /* + Note: It is assumed that the degree of the primitive + polynomial will be equivelent to the m value as + in GF(2^m) + */ + + field_symbol mask = 1; + + alpha_to_[power_] = 0; + + for (field_symbol i = 0; i < static_cast(power_); ++i) + { + alpha_to_[i] = mask; + index_of_[alpha_to_[i]] = i; + + if (prim_poly[i] != 0) + { + alpha_to_[power_] ^= mask; + } + + mask <<= 1; + } + + index_of_[alpha_to_[power_]] = power_; + + mask >>= 1; + + for (field_symbol i = power_ + 1; i < static_cast(field_size_); ++i) + { + if (alpha_to_[i - 1] >= mask) + alpha_to_[i] = alpha_to_[power_] ^ ((alpha_to_[i - 1] ^ mask) << 1); + else + alpha_to_[i] = alpha_to_[i - 1] << 1; + + index_of_[alpha_to_[i]] = i; + } + + index_of_[0] = GFERROR; + alpha_to_[field_size_] = 1; + + #if !defined(NO_GFLUT) + + for (field_symbol i = 0; i < static_cast(field_size_ + 1); ++i) + { + for (field_symbol j = 0; j < static_cast(field_size_ + 1); ++j) + { + mul_table_[i][j] = gen_mul(i,j); + div_table_[i][j] = gen_div(i,j); + exp_table_[i][j] = gen_exp(i,j); + } + } + + #ifdef LINEAR_EXP_LUT + for (field_symbol i = 0; i < static_cast(field_size_ + 1); ++i) + { + for (int j = 0; j < static_cast(2 * field_size_); ++j) + { + linear_exp_table_[i][j] = gen_exp(i,j); + } + } + #endif + + for (field_symbol i = 0; i < static_cast(field_size_ + 1); ++i) + { + mul_inverse_[i] = gen_inverse(i); + mul_inverse_[i + (field_size_ + 1)] = mul_inverse_[i]; + } + + #endif + } + + inline field_symbol field::gen_mul(const field_symbol& a, const field_symbol& b) const + { + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] + index_of_[b])]; + } + + inline field_symbol field::gen_div(const field_symbol& a, const field_symbol& b) const + { + if ((a == 0) || (b == 0)) + return 0; + else + return alpha_to_[normalize(index_of_[a] - index_of_[b] + field_size_)]; + } + + inline field_symbol field::gen_exp(const field_symbol& a, const std::size_t& n) const + { + if (a != 0) + return ((n == 0) ? 
1 : alpha_to_[normalize(index_of_[a] * static_cast(n))]); + else + return 0; + } + + inline field_symbol field::gen_inverse(const field_symbol& val) const + { + return alpha_to_[normalize(field_size_ - index_of_[val])]; + } + + inline std::size_t field::create_array(char buffer[], + const std::size_t& length, + const std::size_t offset, + field_symbol** array) + { + const std::size_t row_size = length * sizeof(field_symbol); + (*array) = new(buffer + offset)field_symbol[length]; + return row_size + offset; + } + + inline std::size_t field::create_2d_array(char buffer[], + std::size_t row_cnt, std::size_t col_cnt, + const std::size_t offset, + field_symbol*** array) + { + const std::size_t row_size = col_cnt * sizeof(field_symbol); + char* buffer__offset = buffer + offset; + (*array) = new field_symbol* [row_cnt]; + for (std::size_t i = 0; i < row_cnt; ++i) + { + (*array)[i] = new(buffer__offset + (i * row_size))field_symbol[col_cnt]; + } + return (row_cnt * row_size) + offset; + } + + inline std::ostream& operator << (std::ostream& os, const field& gf) + { + for (std::size_t i = 0; i < (gf.field_size_ + 1); ++i) + { + os << i << "\t" << gf.alpha_to_[i] << "\t" << gf.index_of_[i] << std::endl; + } + + return os; + } + + /* 1x^0 + 1x^1 + 0x^2 + 1x^3 */ + const unsigned int primitive_polynomial00[] = {1, 1, 0, 1}; + const unsigned int primitive_polynomial_size00 = 4; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 1x^4*/ + const unsigned int primitive_polynomial01[] = {1, 1, 0, 0, 1}; + const unsigned int primitive_polynomial_size01 = 5; + + /* 1x^0 + 0x^1 + 1x^2 + 0x^3 + 0x^4 + 1x^5 */ + const unsigned int primitive_polynomial02[] = {1, 0, 1, 0, 0, 1}; + const unsigned int primitive_polynomial_size02 = 6; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 0x^4 + 0x^5 + 1x^6 */ + const unsigned int primitive_polynomial03[] = {1, 1, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size03 = 7; + + /* 1x^0 + 0x^1 + 0x^2 + 1x^3 + 0x^4 + 0x^5 + 0x^6 + 1x^7 */ + const unsigned int primitive_polynomial04[] = {1, 0, 0, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size04 = 8; + + /* 1x^0 + 0x^1 + 1x^2 + 1x^3 + 1x^4 + 0x^5 + 0x^6 + 0x^7 + 1x^8 */ + const unsigned int primitive_polynomial05[] = {1, 0, 1, 1, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size05 = 9; + + /* 1x^0 + 1x^1 + 1x^2 + 0x^3 + 0x^4 + 0x^5 + 0x^6 + 1x^7 + 1x^8 */ + const unsigned int primitive_polynomial06[] = {1, 1, 1, 0, 0, 0, 0, 1, 1}; + const unsigned int primitive_polynomial_size06 = 9; + + /* 1x^0 + 0x^1 + 0x^2 + 0x^3 + 1x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 1x^9 */ + const unsigned int primitive_polynomial07[] = {1, 0, 0, 0, 1, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size07 = 10; + + /* 1x^0 + 0x^1 + 0x^2 + 1x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 1x^10 */ + const unsigned int primitive_polynomial08[] = {1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size08 = 11; + + /* 1x^0 + 0x^1 + 1x^2 + 0x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 1x^11 */ + const unsigned int primitive_polynomial09[] = {1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size09 = 12; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 1x^4 + 0x^5 + 1x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 1x^12 */ + const unsigned int primitive_polynomial10[] = {1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size10 = 13; + + /* 1x^0 + 1x^1 + 0x^2 + 1x^3 + 1x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 0x^12 + 1x^13 */ + const unsigned int 
primitive_polynomial11[] = {1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size11 = 14; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 0x^4 + 0x^5 + 1x^6 + 0x^7 + 0x^8 + 0x^9 + 1x^10 + 0x^11 + 0x^12 + 0x^13 + 1x^14 */ + const unsigned int primitive_polynomial12[] = {1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size12 = 15; + + /* 1x^0 + 1x^1 + 0x^2 + 0x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 0x^12 + 0x^13 + 0x^14 + 1x^15 */ + const unsigned int primitive_polynomial13[] = {1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size13 = 16; + + /* 1x^0 + 1x^1 + 0x^2 + 1x^3 + 0x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 0x^9 + 0x^10 + 0x^11 + 1x^12 + 0x^13 + 0x^14 + 0x^15 + 1x^16 */ + const unsigned int primitive_polynomial14[] = {1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1}; + const unsigned int primitive_polynomial_size14 = 17; + + } // namespace galois + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_galois_field_element.hpp b/modem/fec/schifra_galois_field_element.hpp new file mode 100644 index 0000000..e6aa89b --- /dev/null +++ b/modem/fec/schifra_galois_field_element.hpp @@ -0,0 +1,277 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_GALOIS_FIELD_ELEMENT_HPP +#define INCLUDE_SCHIFRA_GALOIS_FIELD_ELEMENT_HPP + + +#include +#include + +#include "schifra_galois_field.hpp" + + +namespace schifra +{ + + namespace galois + { + + class field_element + { + public: + + field_element(const field& gfield) + : field_(gfield), + poly_value_(-1) + {} + + field_element(const field& gfield,const field_symbol& v) + : field_(const_cast(gfield)), + poly_value_(v) + {} + + field_element(const field_element& gfe) + : field_(const_cast(gfe.field_)), + poly_value_(gfe.poly_value_) + {} + + ~field_element() + {} + + inline field_element& operator = (const field_element& gfe) + { + if ((this != &gfe) && (&field_ == &gfe.field_)) + { + poly_value_ = gfe.poly_value_; + } + + return *this; + } + + inline field_element& operator = (const field_symbol& v) + { + poly_value_ = v & field_.size(); + return *this; + } + + inline field_element& operator += (const field_element& gfe) + { + poly_value_ ^= gfe.poly_value_; + return *this; + } + + inline field_element& operator += (const field_symbol& v) + { + poly_value_ ^= v; + return *this; + } + + inline field_element& operator -= (const field_element& gfe) + { + *this += gfe; + return *this; + } + + inline field_element& operator -= (const field_symbol& v) + { + *this += v; + return *this; + } + + inline field_element& operator *= (const field_element& gfe) + { + poly_value_ = field_.mul(poly_value_, gfe.poly_value_); + return *this; + } + + inline field_element& operator *= (const field_symbol& v) + { + poly_value_ = field_.mul(poly_value_, v); + return *this; + } + + inline field_element& operator /= (const field_element& gfe) + { + poly_value_ = field_.div(poly_value_, gfe.poly_value_); + return *this; + } + + inline field_element& operator /= (const field_symbol& v) + { + poly_value_ = field_.div(poly_value_, v); + return *this; + } + + inline field_element& operator ^= (const int& n) + { + poly_value_ = field_.exp(poly_value_,n); + return *this; + } + + inline bool operator == (const field_element& gfe) const + { + return ((field_ == gfe.field_) && (poly_value_ == gfe.poly_value_)); + } + + inline bool operator == (const field_symbol& v) const + { + return (poly_value_ == v); + } + + inline bool operator != (const field_element& gfe) const + { + return ((field_ != gfe.field_) || (poly_value_ != gfe.poly_value_)); + } + + inline bool operator != (const field_symbol& v) const + { + return (poly_value_ != v); + } + + inline bool operator < (const field_element& gfe) + { + return (poly_value_ < gfe.poly_value_); + } + + inline bool operator < (const field_symbol& v) + { + return (poly_value_ < v); + } + + inline bool operator > (const field_element& gfe) + { + return (poly_value_ > gfe.poly_value_); + } + + inline bool operator > (const field_symbol& v) + { + return (poly_value_ > v); + } + + inline field_symbol index() const + { + return field_.index(poly_value_); + } + + inline field_symbol poly() const + { + return poly_value_; + } + + inline field_symbol& poly() + { + return poly_value_; + } + + inline const field& galois_field() const + { + return field_; + } + + inline field_symbol inverse() const + { + return field_.inverse(poly_value_); + } + + inline void normalize() + { + poly_value_ &= field_.size(); + } + + friend std::ostream& operator << (std::ostream& os, const field_element& gfe); + + private: + + const field& 
field_; + field_symbol poly_value_; + + }; + + inline field_element operator + (const field_element& a, const field_element& b); + inline field_element operator - (const field_element& a, const field_element& b); + inline field_element operator * (const field_element& a, const field_element& b); + inline field_element operator * (const field_element& a, const field_symbol& b); + inline field_element operator * (const field_symbol& a, const field_element& b); + inline field_element operator / (const field_element& a, const field_element& b); + inline field_element operator ^ (const field_element& a, const int& b); + + inline std::ostream& operator << (std::ostream& os, const field_element& gfe) + { + os << gfe.poly_value_; + return os; + } + + inline field_element operator + (const field_element& a, const field_element& b) + { + field_element result = a; + result += b; + return result; + } + + inline field_element operator - (const field_element& a, const field_element& b) + { + field_element result = a; + result -= b; + return result; + } + + inline field_element operator * (const field_element& a, const field_element& b) + { + field_element result = a; + result *= b; + return result; + } + + inline field_element operator * (const field_element& a, const field_symbol& b) + { + field_element result = a; + result *= b; + return result; + } + + inline field_element operator * (const field_symbol& a, const field_element& b) + { + field_element result = b; + result *= a; + return result; + } + + inline field_element operator / (const field_element& a, const field_element& b) + { + field_element result = a; + result /= b; + return result; + } + + inline field_element operator ^ (const field_element& a, const int& b) + { + field_element result = a; + result ^= b; + return result; + } + + } // namespace galois + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_galois_field_polynomial.hpp b/modem/fec/schifra_galois_field_polynomial.hpp new file mode 100644 index 0000000..63ff7d1 --- /dev/null +++ b/modem/fec/schifra_galois_field_polynomial.hpp @@ -0,0 +1,839 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_GALOIS_FIELD_POLYNOMIAL_HPP +#define INCLUDE_SCHIFRA_GALOIS_FIELD_POLYNOMIAL_HPP + + +#include +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" + + +namespace schifra +{ + + namespace galois + { + + class field_polynomial + { + public: + + field_polynomial(const field& gfield); + field_polynomial(const field& gfield, const unsigned int& degree); + field_polynomial(const field& gfield, const unsigned int& degree, const field_element element[]); + field_polynomial(const field_polynomial& polynomial); + field_polynomial(const field_element& gfe); + ~field_polynomial() {} + + bool valid() const; + int deg() const; + const field& galois_field() const; + void set_degree(const unsigned int& x); + void simplify(); + + field_polynomial& operator = (const field_polynomial& polynomial); + field_polynomial& operator = (const field_element& element); + field_polynomial& operator += (const field_polynomial& element); + field_polynomial& operator += (const field_element& element); + field_polynomial& operator -= (const field_polynomial& element); + field_polynomial& operator -= (const field_element& element); + field_polynomial& operator *= (const field_polynomial& polynomial); + field_polynomial& operator *= (const field_element& element); + field_polynomial& operator /= (const field_polynomial& divisor); + field_polynomial& operator /= (const field_element& element); + field_polynomial& operator %= (const field_polynomial& divisor); + field_polynomial& operator %= (const unsigned int& power); + field_polynomial& operator ^= (const unsigned int& n); + field_polynomial& operator <<= (const unsigned int& n); + field_polynomial& operator >>= (const unsigned int& n); + + field_element& operator[] (const std::size_t& term); + field_element operator() (const field_element& value); + field_element operator() (field_symbol value); + + const field_element& operator[](const std::size_t& term) const; + const field_element operator()(const field_element& value) const; + const field_element operator()(field_symbol value) const; + + bool operator==(const field_polynomial& polynomial) const; + bool operator!=(const field_polynomial& polynomial) const; + + bool monic() const; + + field_polynomial derivative() const; + + friend std::ostream& operator << (std::ostream& os, const field_polynomial& polynomial); + + private: + + typedef std::vector::iterator poly_iter; + typedef std::vector::const_iterator const_poly_iter; + + void simplify(field_polynomial& polynomial) const; + + field& field_; + std::vector poly_; + }; + + field_polynomial operator + (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator + (const field_polynomial& a, const field_element& b); + field_polynomial operator + (const field_element& a, const field_polynomial& b); + field_polynomial operator + (const field_polynomial& a, const field_symbol& b); + field_polynomial operator + (const field_symbol& a, const field_polynomial& b); + field_polynomial operator - (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator - (const field_polynomial& a, const field_element& b); + field_polynomial operator - (const field_element& a, const field_polynomial& b); + field_polynomial operator - (const field_polynomial& a, const field_symbol& b); + field_polynomial operator - (const 
field_symbol& a, const field_polynomial& b); + field_polynomial operator * (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator * (const field_element& a, const field_polynomial& b); + field_polynomial operator * (const field_polynomial& a, const field_element& b); + field_polynomial operator / (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator / (const field_polynomial& a, const field_element& b); + field_polynomial operator % (const field_polynomial& a, const field_polynomial& b); + field_polynomial operator % (const field_polynomial& a, const unsigned int& power); + field_polynomial operator ^ (const field_polynomial& a, const int& n); + field_polynomial operator <<(const field_polynomial& a, const unsigned int& n); + field_polynomial operator >>(const field_polynomial& a, const unsigned int& n); + field_polynomial gcd(const field_polynomial& a, const field_polynomial& b); + + inline field_polynomial::field_polynomial(const field& gfield) + : field_(const_cast(gfield)) + { + poly_.clear(); + poly_.reserve(256); + } + + inline field_polynomial::field_polynomial(const field& gfield, const unsigned int& degree) + : field_(const_cast(gfield)) + { + poly_.reserve(256); + poly_.resize(degree + 1,field_element(field_,0)); + } + + inline field_polynomial::field_polynomial(const field& gfield, const unsigned int& degree, const field_element element[]) + : field_(const_cast(gfield)) + { + poly_.reserve(256); + + if (element != NULL) + { + /* + It is assumed that element is an array of field elements + with size/element count of degree + 1. + */ + for (unsigned int i = 0; i <= degree; ++i) + { + poly_.push_back(element[i]); + } + } + else + poly_.resize(degree + 1, field_element(field_, 0)); + } + + inline field_polynomial::field_polynomial(const field_polynomial& polynomial) + : field_(const_cast(polynomial.field_)), + poly_ (polynomial.poly_) + {} + + inline field_polynomial::field_polynomial(const field_element& element) + : field_(const_cast(element.galois_field())) + { + poly_.resize(1,element); + } + + inline bool field_polynomial::valid() const + { + return (poly_.size() > 0); + } + + inline int field_polynomial::deg() const + { + return static_cast(poly_.size()) - 1; + } + + inline const field& field_polynomial::galois_field() const + { + return field_; + } + + inline void field_polynomial::set_degree(const unsigned int& x) + { + poly_.resize(x - 1,field_element(field_,0)); + } + + inline field_polynomial& field_polynomial::operator = (const field_polynomial& polynomial) + { + if ((this != &polynomial) && (&field_ == &(polynomial.field_))) + { + poly_ = polynomial.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator = (const field_element& element) + { + if (&field_ == &(element.galois_field())) + { + poly_.resize(1,element); + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator += (const field_polynomial& polynomial) + { + if (&field_ == &(polynomial.field_)) + { + if (poly_.size() < polynomial.poly_.size()) + { + const_poly_iter it0 = polynomial.poly_.begin(); + + for (poly_iter it1 = poly_.begin(); it1 != poly_.end(); ++it0, ++it1) + { + (*it1) += (*it0); + } + + while (it0 != polynomial.poly_.end()) + { + poly_.push_back(*it0); + ++it0; + } + } + else + { + poly_iter it0 = poly_.begin(); + + for (const_poly_iter it1 = polynomial.poly_.begin(); it1 != polynomial.poly_.end(); ++it0, ++it1) + { + (*it0) += (*it1); + } + } + + simplify(*this); + } + + return 
*this; + } + + inline field_polynomial& field_polynomial::operator += (const field_element& element) + { + poly_[0] += element; + return *this; + } + + inline field_polynomial& field_polynomial::operator -= (const field_polynomial& element) + { + return (*this += element); + } + + inline field_polynomial& field_polynomial::operator -= (const field_element& element) + { + poly_[0] -= element; + return *this; + } + + inline field_polynomial& field_polynomial::operator *= (const field_polynomial& polynomial) + { + if (&field_ == &(polynomial.field_)) + { + field_polynomial product(field_,deg() + polynomial.deg() + 1); + + poly_iter result_it = product.poly_.begin(); + + for (poly_iter it0 = poly_.begin(); it0 != poly_.end(); ++it0) + { + poly_iter current_result_it = result_it; + + for (const_poly_iter it1 = polynomial.poly_.begin(); it1 != polynomial.poly_.end(); ++it1) + { + (*current_result_it) += (*it0) * (*it1); + ++current_result_it; + } + + ++result_it; + } + + simplify(product); + poly_ = product.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator *= (const field_element& element) + { + if (field_ == element.galois_field()) + { + for (poly_iter it = poly_.begin(); it != poly_.end(); ++it) + { + (*it) *= element; + } + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator /= (const field_polynomial& divisor) + { + if ( + (&field_ == &divisor.field_) && + (deg() >= divisor.deg()) && + (divisor.deg() >= 0) + ) + { + field_polynomial quotient (field_, deg() - divisor.deg() + 1); + field_polynomial remainder(field_, divisor.deg() - 1); + + for (int i = static_cast(deg()); i >= 0; i--) + { + if (i <= static_cast(quotient.deg())) + { + quotient[i] = remainder[remainder.deg()] / divisor[divisor.deg()]; + + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1] + (quotient[i] * divisor[j]); + } + + remainder[0] = poly_[i] + (quotient[i] * divisor[0]); + } + else + { + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1]; + } + + remainder[0] = poly_[i]; + } + } + + simplify(quotient); + poly_ = quotient.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator /= (const field_element& element) + { + if (field_ == element.galois_field()) + { + for (poly_iter it = poly_.begin(); it != poly_.end(); ++it) + { + (*it) /= element; + } + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator %= (const field_polynomial& divisor) + { + if ( + (field_ == divisor.field_) && + (deg() >= divisor.deg() ) && + (divisor.deg() >= 0 ) + ) + { + field_polynomial quotient (field_, deg() - divisor.deg() + 1); + field_polynomial remainder(field_, divisor.deg() - 1); + + for (int i = static_cast(deg()); i >= 0; i--) + { + if (i <= static_cast(quotient.deg())) + { + quotient[i] = remainder[remainder.deg()] / divisor[divisor.deg()]; + + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1] + (quotient[i] * divisor[j]); + } + + remainder[0] = poly_[i] + (quotient[i] * divisor[0]); + } + else + { + for (int j = static_cast(remainder.deg()); j > 0; --j) + { + remainder[j] = remainder[j - 1]; + } + + remainder[0] = poly_[i]; + } + } + + poly_ = remainder.poly_; + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator %= (const unsigned int& power) + { + if (poly_.size() >= power) + { + poly_.resize(power,field_element(field_,0)); + simplify(*this); + } + + return 
*this; + } + + inline field_polynomial& field_polynomial::operator ^= (const unsigned int& n) + { + field_polynomial result = *this; + + for (std::size_t i = 0; i < n; ++i) + { + result *= *this; + } + + *this = result; + + return *this; + } + + inline field_polynomial& field_polynomial::operator <<= (const unsigned int& n) + { + if (poly_.size() > 0) + { + size_t initial_size = poly_.size(); + + poly_.resize(poly_.size() + n, field_element(field_,0)); + + for (size_t i = initial_size - 1; static_cast(i) >= 0; --i) + { + poly_[i + n] = poly_[i]; + } + + for (unsigned int i = 0; i < n; ++i) + { + poly_[i] = 0; + } + } + + return *this; + } + + inline field_polynomial& field_polynomial::operator >>= (const unsigned int& n) + { + if (n <= poly_.size()) + { + for (unsigned int i = 0; i <= deg() - n; ++i) + { + poly_[i] = poly_[i + n]; + } + + poly_.resize(poly_.size() - n,field_element(field_,0)); + } + else if (static_cast(n) >= (deg() + 1)) + { + poly_.resize(0,field_element(field_,0)); + } + + return *this; + } + + inline const field_element& field_polynomial::operator [] (const std::size_t& term) const + { + assert(term < poly_.size()); + return poly_[term]; + } + + inline field_element& field_polynomial::operator [] (const std::size_t& term) + { + assert(term < poly_.size()); + return poly_[term]; + } + + inline field_element field_polynomial::operator () (const field_element& value) + { + field_element result(field_,0); + + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + field_symbol value_poly_form = value.poly(); + + for (poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value_poly_form,i), (*it).poly()); + } + + result = total_sum; + } + + return result; + } + + inline const field_element field_polynomial::operator () (const field_element& value) const + { + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + field_symbol value_poly_form = value.poly(); + + for (const_poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value_poly_form,i), (*it).poly()); + } + + return field_element(field_,total_sum); + } + + return field_element(field_,0); + } + + inline field_element field_polynomial::operator () (field_symbol value) + { + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + + for (const_poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value,i), (*it).poly()); + } + + return field_element(field_,total_sum); + } + + return field_element(field_,0); + } + + inline const field_element field_polynomial::operator () (field_symbol value) const + { + if (!poly_.empty()) + { + int i = 0; + field_symbol total_sum = 0 ; + + for (const_poly_iter it = poly_.begin(); it != poly_.end(); ++it, ++i) + { + total_sum ^= field_.mul(field_.exp(value, i), (*it).poly()); + } + + return field_element(field_,total_sum); + } + + return field_element(field_,0); + } + + inline bool field_polynomial::operator == (const field_polynomial& polynomial) const + { + if (field_ == polynomial.field_) + { + if (poly_.size() != polynomial.poly_.size()) + return false; + else + { + const_poly_iter it0 = polynomial.poly_.begin(); + + for (const_poly_iter it1 = poly_.begin(); it1 != poly_.end(); ++it0, ++it1) + { + if ((*it0) != (*it1)) + return false; + } + + return true; + } + } + else + return false; + } + + inline bool field_polynomial::operator != (const field_polynomial& polynomial) const + { + return !(*this == 
polynomial); + } + + inline field_polynomial field_polynomial::derivative() const + { + if ((*this).poly_.size() > 1) + { + field_polynomial deriv(field_,deg()); + + const std::size_t upper_bound = poly_.size() - 1; + + for (std::size_t i = 0; i < upper_bound; i += 2) + { + deriv.poly_[i] = poly_[i + 1]; + } + + simplify(deriv); + return deriv; + } + + return field_polynomial(field_,0); + } + + inline bool field_polynomial::monic() const + { + return (poly_[poly_.size() - 1] == static_cast(1)); + } + + inline void field_polynomial::simplify() + { + simplify(*this); + } + + inline void field_polynomial::simplify(field_polynomial& polynomial) const + { + std::size_t poly_size = polynomial.poly_.size(); + + if ((poly_size > 0) && (polynomial.poly_.back() == 0)) + { + poly_iter it = polynomial.poly_.end (); + poly_iter begin = polynomial.poly_.begin(); + + std::size_t count = 0; + + while ((begin != it) && (*(--it) == 0)) + { + ++count; + } + + if (0 != count) + { + polynomial.poly_.resize(poly_size - count, field_element(field_,0)); + } + } + } + + inline field_polynomial operator + (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result += b; + return result; + } + + inline field_polynomial operator + (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result += b; + return result; + } + + inline field_polynomial operator + (const field_element& a, const field_polynomial& b) + { + field_polynomial result = b; + result += a; + return result; + } + + inline field_polynomial operator + (const field_polynomial& a, const field_symbol& b) + { + return a + field_element(a.galois_field(),b); + } + + inline field_polynomial operator + (const field_symbol& a, const field_polynomial& b) + { + return b + field_element(b.galois_field(),a); + } + + inline field_polynomial operator - (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result -= b; + return result; + } + + inline field_polynomial operator - (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result -= b; + return result; + } + + inline field_polynomial operator - (const field_element& a, const field_polynomial& b) + { + field_polynomial result = b; + result -= a; + return result; + } + + inline field_polynomial operator - (const field_polynomial& a, const field_symbol& b) + { + return a - field_element(a.galois_field(),b); + } + + inline field_polynomial operator - (const field_symbol& a, const field_polynomial& b) + { + return b - field_element(b.galois_field(),a); + } + + inline field_polynomial operator * (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result *= b; + return result; + } + + inline field_polynomial operator * (const field_element& a, const field_polynomial& b) + { + field_polynomial result = b; + result *= a; + return result; + } + + inline field_polynomial operator * (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result *= b; + return result; + } + + inline field_polynomial operator / (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial result = a; + result /= b; + return result; + } + + inline field_polynomial operator / (const field_polynomial& a, const field_element& b) + { + field_polynomial result = a; + result /= b; + return result; + } + + inline field_polynomial operator % (const field_polynomial& a, const field_polynomial& b) + { + field_polynomial 
result = a; + result %= b; + return result; + } + + inline field_polynomial operator % (const field_polynomial& a, const unsigned int& n) + { + field_polynomial result = a; + result %= n; + return result; + } + + inline field_polynomial operator ^ (const field_polynomial& a, const int& n) + { + field_polynomial result = a; + result ^= n; + return result; + } + + inline field_polynomial operator << (const field_polynomial& a, const unsigned int& n) + { + field_polynomial result = a; + result <<= n; + return result; + } + + inline field_polynomial operator >> (const field_polynomial& a, const unsigned int& n) + { + field_polynomial result = a; + result >>= n; + return result; + } + + inline field_polynomial gcd(const field_polynomial& a, const field_polynomial& b) + { + if (&a.galois_field() == &b.galois_field()) + { + if ((!a.valid()) && (!b.valid())) + { + field_polynomial error_polynomial(a.galois_field()); + return error_polynomial; + } + + if (!a.valid()) return b; + if (!b.valid()) return a; + + field_polynomial x = a % b; + field_polynomial y = b; + field_polynomial z = x; + + while ((z = (y % x)).valid()) + { + y = x; + x = z; + } + return x; + } + else + { + field_polynomial error_polynomial(a.galois_field()); + return error_polynomial; + } + } + + inline field_polynomial generate_X(const field& gfield) + { + const field_element xgfe[2] = { + galois::field_element(gfield, 0), + galois::field_element(gfield, 1) + }; + + field_polynomial X_(gfield,1,xgfe); + + return X_; + } + + inline std::ostream& operator << (std::ostream& os, const field_polynomial& polynomial) + { + if (polynomial.deg() >= 0) + { + /* + for (unsigned int i = 0; i < polynomial.poly_.size(); ++i) + { + os << polynomial.poly[i].index() + << ((i != (polynomial.deg())) ? " " : ""); + } + + std::cout << " poly form: "; + */ + + for (unsigned int i = 0; i < polynomial.poly_.size(); ++i) + { + os << polynomial.poly_[i].poly() + << " " + << "x^" + << i + << ((static_cast(i) != (polynomial.deg())) ? " + " : ""); + } + } + + return os; + } + + } // namespace galois + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_galois_utilities.hpp b/modem/fec/schifra_galois_utilities.hpp new file mode 100644 index 0000000..e3c9f3e --- /dev/null +++ b/modem/fec/schifra_galois_utilities.hpp @@ -0,0 +1,115 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*)
+(* *)
+(* URL: http://www.schifra.com/license.html *)
+(* *)
+(**************************************************************************)
+*/
+
+
+#ifndef INCLUDE_SCHIFRA_GALOIS_UTILITIES_HPP
+#define INCLUDE_SCHIFRA_GALOIS_UTILITIES_HPP
+
+
+#include <algorithm>
+#include <iomanip>
+#include <iostream>
+#include <iterator>
+#include <sstream>
+#include <string>
+#include <vector>
+
+#include "schifra_galois_field.hpp"
+#include "schifra_galois_field_polynomial.hpp"
+
+
+namespace schifra
+{
+
+   namespace galois
+   {
+
+      inline std::string convert_to_string(const unsigned int& value, const unsigned int& width)
+      {
+         std::stringstream stream;
+         stream << std::setw(width) << std::setfill('0') << value;
+         return stream.str();
+      }
+
+      inline std::string convert_to_string(const int& value, const unsigned int& width)
+      {
+         std::stringstream stream;
+         stream << std::setw(width) << std::setfill('0') << value;
+         return stream.str();
+      }
+
+      inline std::string convert_to_bin(const unsigned int& value, const unsigned int& field_descriptor)
+      {
+         std::string output = std::string(field_descriptor, ' ');
+
+         for (unsigned int i = 0; i < field_descriptor; ++i)
+         {
+            output[i] = ((((value >> (field_descriptor - 1 - i)) & 1) == 1) ? '1' : '0');
+         }
+
+         return output;
+      }
+
+      inline void alpha_table(std::ostream& os, const field& gf)
+      {
+         std::vector<std::string> str_list;
+
+         for (unsigned int i = 0; i < gf.size() + 1; ++i)
+         {
+            str_list.push_back("alpha^" + convert_to_string(gf.index(i),2) + "\t" +
+                               convert_to_bin   (i,gf.pwr())               + "\t" +
+                               convert_to_string(gf.alpha(i),2));
+         }
+
+         std::sort(str_list.begin(),str_list.end());
+         std::copy(str_list.begin(),str_list.end(),std::ostream_iterator<std::string>(os,"\n"));
+      }
+
+      inline void polynomial_alpha_form(std::ostream& os, const field_polynomial& polynomial)
+      {
+         for (int i = 0; i < (polynomial.deg() + 1); ++i)
+         {
+            field_symbol alpha_power = polynomial.galois_field().index(polynomial[i].poly());
+
+            if (alpha_power != 0)
+               os << static_cast<unsigned char>(224) << "^" << convert_to_string(alpha_power,2);
+            else
+               os << 1;
+
+            os << " * "
+               << "x^"
+               << i
+               << ((i != (polynomial.deg())) ? " + " : "");
+         }
+      }
+
+      inline void polynomial_alpha_form(std::ostream& os, const std::string& prepend, const field_polynomial& polynomial)
+      {
+         os << prepend;
+         polynomial_alpha_form(os,polynomial);
+         os << std::endl;
+      }
+
+   } // namespace galois
+
+} // namespace schifra
+
+#endif
diff --git a/modem/fec/schifra_reed_solomon_bitio.hpp b/modem/fec/schifra_reed_solomon_bitio.hpp
new file mode 100644
index 0000000..6130d47
--- /dev/null
+++ b/modem/fec/schifra_reed_solomon_bitio.hpp
@@ -0,0 +1,201 @@
+/*
+(**************************************************************************)
+(* *)
+(* Schifra *)
+(* Reed-Solomon Error Correcting Code Library *)
+(* *)
+(* Release Version 0.0.1 *)
+(* http://www.schifra.com *)
+(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *)
+(* *)
+(* The Schifra Reed-Solomon error correcting code library and all its *)
+(* components are supplied under the terms of the General Schifra License *)
+(* agreement. The contents of the Schifra Reed-Solomon error correcting *)
+(* code library and all its components may not be copied or disclosed *)
+(* except in accordance with the terms of that agreement.
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_BITIO_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_BITIO_HPP + + +#include + + +namespace schifra +{ + + namespace reed_solomon + { + + namespace bitio + { + + template class convert_data_to_symbol; + + template <> + class convert_data_to_symbol<2> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, s_it+=4) + { + (* s_it ) = (*d_it) & 0x03; + (*(s_it + 1)) = ((*d_it) >> 2) & 0x03; + (*(s_it + 2)) = ((*d_it) >> 4) & 0x03; + (*(s_it + 3)) = ((*d_it) >> 6) & 0x03; + } + } + }; + + template <> + class convert_data_to_symbol<4> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, s_it+=2) + { + (* s_it ) = (*d_it) & 0x0F; + (*(s_it + 1)) = ((*d_it) >> 4) & 0x0F; + } + } + }; + + template <> + class convert_data_to_symbol<8> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, ++s_it) + { + (*s_it) = (*d_it) & 0xFF; + } + } + }; + + template <> + class convert_data_to_symbol<16> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + const BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; i+=2, d_it+=2, ++s_it) + { + (*s_it) = (*d_it) & 0x000000FF; + (*s_it) |= (static_cast((*(d_it + 1))) << 8) & 0x0000FF00; + } + } + }; + + template <> + class convert_data_to_symbol<24> + { + public: + + template + convert_data_to_symbol(const BitBlock data[], const std::size_t data_length, int symbol[]) + { + BitBlock* d_it = & data[0]; + int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; i+=3, d_it+=3, ++s_it) + { + (*s_it) |= (*d_it) & 0x000000FF; + (*s_it) |= (static_cast((*(d_it + 1))) << 8) & 0x0000FF00; + (*s_it) |= (static_cast((*(d_it + 2))) << 16) & 0x00FF0000; + } + } + }; + + template class convert_symbol_to_data; + + template <> + class convert_symbol_to_data<4> + { + public: + + template + convert_symbol_to_data(const int symbol[], BitBlock data[], const std::size_t data_length) + { + BitBlock* d_it = & data[0]; + const int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, ++s_it) + { + (*d_it) = (*s_it) & 0x0000000F; + (*d_it) |= ((*(s_it + 1)) & 0x0000000F) << 4; + } + } + }; + + template <> + class convert_symbol_to_data<8> + { + public: + template + convert_symbol_to_data(const int symbol[], BitBlock data[], const std::size_t data_length) + { + BitBlock* d_it = & data[0]; + const int* s_it = &symbol[0]; + + for (std::size_t i = 0; i < data_length; ++i, ++d_it, ++s_it) + { + (*d_it) = static_cast((*s_it) & 0xFF); + } + } + }; + + template <> + class convert_symbol_to_data<16> + { + public: + + template + convert_symbol_to_data(const int symbol[], BitBlock data[], const std::size_t data_length) + { + BitBlock* d_it = & data[0]; + const int* s_it = &symbol[0]; + + for (std::size_t i = 0; 
i < data_length; ++i, ++d_it, ++s_it) + { + (*d_it) = (*s_it) & 0xFFFF; + } + } + }; + + } // namespace bitio + + } // namespace reed_solomon + +} // namespace schifra + + +#endif diff --git a/modem/fec/schifra_reed_solomon_block.hpp b/modem/fec/schifra_reed_solomon_block.hpp new file mode 100644 index 0000000..ec1852c --- /dev/null +++ b/modem/fec/schifra_reed_solomon_block.hpp @@ -0,0 +1,382 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_BLOCK_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_BLOCK_HPP + + +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + struct block + { + public: + + typedef galois::field_symbol symbol_type; + typedef traits::reed_solomon_triat trait; + typedef traits::symbol symbol; + typedef block block_t; + + enum error_t + { + e_no_error = 0, + e_encoder_error0 = 1, + e_encoder_error1 = 2, + e_decoder_error0 = 3, + e_decoder_error1 = 4, + e_decoder_error2 = 5, + e_decoder_error3 = 6, + e_decoder_error4 = 7 + }; + + block() + : errors_detected (0), + errors_corrected(0), + zero_numerators (0), + unrecoverable(false), + error(e_no_error) + { + traits::validate_reed_solomon_block_parameters(); + } + + block(const std::string& _data, const std::string& _fec) + : errors_detected (0), + errors_corrected(0), + zero_numerators (0), + unrecoverable(false), + error(e_no_error) + { + traits::validate_reed_solomon_block_parameters(); + + for (std::size_t i = 0; i < data_length; ++i) + { + data[i] = static_cast(_data[i]); + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + data[i + data_length] = static_cast(_fec[i]); + } + } + + galois::field_symbol& operator[](const std::size_t& index) + { + return data[index]; + } + + const galois::field_symbol& operator[](const std::size_t& index) const + { + return data[index]; + } + + galois::field_symbol& operator()(const std::size_t& index) + { + return operator[](index); + } + + galois::field_symbol& fec(const std::size_t& index) + { + return data[data_length + index]; + } + + bool data_to_string(std::string& data_str) const + { + if (data_str.length() != data_length) + { + return false; + } + + for (std::size_t i = 0; i < data_length; ++i) + { + data_str[i] = static_cast(data[i]); + } + + return true; + } + + bool fec_to_string(std::string& fec_str) const + { + if (fec_str.length() != fec_length) + { + return false; + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + fec_str[i] = static_cast(data[data_length + i]); + } + + return true; + } + + std::string fec_to_string() const + { + std::string fec_str(fec_length,0x00); + fec_to_string(fec_str); + return fec_str; + } + + void clear(galois::field_symbol value = 0) + { + for 
(std::size_t i = 0; i < code_length; ++i) + { + data[i] = value; + } + } + + void clear_data(galois::field_symbol value = 0) + { + for (std::size_t i = 0; i < data_length; ++i) + { + data[i] = value; + } + } + + void clear_fec(galois::field_symbol value = 0) + { + for (std::size_t i = 0; i < fec_length; ++i) + { + data[data_length + i] = value; + } + } + + void reset(galois::field_symbol value = 0) + { + clear(value); + errors_detected = 0; + errors_corrected = 0; + zero_numerators = 0; + unrecoverable = false; + error = e_no_error; + } + + template + void copy_state(const BlockType& b) + { + errors_detected = b.errors_detected; + errors_corrected = b.errors_corrected; + zero_numerators = b.zero_numerators; + unrecoverable = b.unrecoverable; + error = static_cast(b.error); + } + + inline std::string error_as_string() const + { + switch (error) + { + case e_no_error : return "No Error"; + case e_encoder_error0 : return "Invalid Encoder"; + case e_encoder_error1 : return "Incompatible Generator Polynomial"; + case e_decoder_error0 : return "Invalid Decoder"; + case e_decoder_error1 : return "Decoder Failure - Non-zero Syndrome"; + case e_decoder_error2 : return "Decoder Failure - Too Many Errors/Erasures"; + case e_decoder_error3 : return "Decoder Failure - Invalid Symbol Correction"; + case e_decoder_error4 : return "Decoder Failure - Invalid Codeword Correction"; + default : return "Invalid Error Code"; + } + } + + std::size_t errors_detected; + std::size_t errors_corrected; + std::size_t zero_numerators; + bool unrecoverable; + error_t error; + galois::field_symbol data[code_length]; + }; + + template + inline void copy(const block& src_block, block& dest_block) + { + for (std::size_t index = 0; index < code_length; ++index) + { + dest_block.data[index] = src_block.data[index]; + } + } + + template + inline void copy(const T src_data[], block& dest_block) + { + for (std::size_t index = 0; index < (code_length - fec_length); ++index, ++src_data) + { + dest_block.data[index] = static_cast::symbol_type>(*src_data); + } + } + + template + inline void copy(const T src_data[], + const std::size_t& src_length, + block& dest_block) + { + for (std::size_t index = 0; index < src_length; ++index, ++src_data) + { + dest_block.data[index] = static_cast::symbol_type>(*src_data); + } + } + + template + inline void copy(const block src_block_stack[stack_size], + block dest_block_stack[stack_size]) + { + for (std::size_t row = 0; row < stack_size; ++row) + { + copy(src_block_stack[row], dest_block_stack[row]); + } + } + + template + inline bool copy(const T src_data[], + const std::size_t src_length, + block dest_block_stack[stack_size]) + { + const std::size_t data_length = code_length - fec_length; + + if (src_length > (stack_size * data_length)) + { + return false; + } + + const std::size_t row_count = src_length / data_length; + + for (std::size_t row = 0; row < row_count; ++row, src_data += data_length) + { + copy(src_data, dest_block_stack[row]); + } + + if ((src_length % data_length) != 0) + { + copy(src_data, src_length % data_length, dest_block_stack[row_count]); + } + + return true; + } + + template + inline void full_copy(const block& src_block, + T dest_data[]) + { + for (std::size_t i = 0; i < code_length; ++i, ++dest_data) + { + (*dest_data) = static_cast(src_block[i]); + } + } + + template + inline void copy(const block src_block_stack[stack_size], + T dest_data[]) + { + const std::size_t data_length = code_length - fec_length; + + for (std::size_t i = 0; i < stack_size; ++i) + { + for 
(std::size_t j = 0; j < data_length; ++j, ++dest_data) + { + (*dest_data) = static_cast(src_block_stack[i][j]); + } + } + } + + template + inline std::ostream& operator<<(std::ostream& os, const block& rs_block) + { + for (std::size_t i = 0; i < code_length; ++i) + { + os << static_cast(rs_block[i]); + } + + return os; + } + + template + struct data_block + { + public: + + typedef T value_type; + + T& operator[](const std::size_t index) { return data[index]; } + const T& operator[](const std::size_t index) const { return data[index]; } + + T* begin() { return data; } + const T* begin() const { return data; } + + T* end() { return data + block_length; } + const T* end() const { return data + block_length; } + + void clear(T value = 0) + { + for (std::size_t i = 0; i < block_length; ++i) + { + data[i] = value; + } + } + + private: + + T data[block_length]; + }; + + template + inline void copy(const data_block& src_block, data_block& dest_block) + { + for (std::size_t index = 0; index < block_length; ++index) + { + dest_block[index] = src_block[index]; + } + } + + template + inline void copy(const data_block src_block_stack[stack_size], + data_block dest_block_stack[stack_size]) + { + for (std::size_t row = 0; row < stack_size; ++row) + { + copy(src_block_stack[row], dest_block_stack[row]); + } + } + + template + inline void full_copy(const data_block& src_block, T dest_data[]) + { + for (std::size_t i = 0; i < block_length; ++i, ++dest_data) + { + (*dest_data) = static_cast(src_block[i]); + } + } + + typedef std::vector erasure_locations_t; + + } // namespace reed_solomon + +} // namepsace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_codec_validator.hpp b/modem/fec/schifra_reed_solomon_codec_validator.hpp new file mode 100644 index 0000000..3057c39 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_codec_validator.hpp @@ -0,0 +1,998 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_CODEC_VALIDATOR_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_CODEC_VALIDATOR_HPP + + +#include +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_sequential_root_generator_polynomial_creator.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_ecc_traits.hpp" +#include "schifra_error_processes.hpp" +#include "schifra_utilities.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template , + typename decoder_type = decoder, + std::size_t data_length = code_length - fec_length> + class codec_validator + { + public: + + typedef block block_type; + + codec_validator(const galois::field& gf, + const unsigned int gpii, + const std::string& msg) + : field_(gf), + generator_polynomial_(galois::field_polynomial(field_)), + rs_encoder_(reinterpret_cast(0)), + rs_decoder_(reinterpret_cast(0)), + message(msg), + genpoly_initial_index_(gpii), + blocks_processed_(0), + block_failures_(0) + { + traits::equivalent_encoder_decoder(); + + if ( + !make_sequential_root_generator_polynomial(field_, + genpoly_initial_index_, + fec_length, + generator_polynomial_) + ) + { + return; + } + + rs_encoder_ = new encoder_type(field_,generator_polynomial_); + rs_decoder_ = new decoder_type(field_,genpoly_initial_index_); + + if (!rs_encoder_->encode(message,rs_block_original)) + { + std::cout << "codec_validator() - ERROR: Encoding process failed!" << std::endl; + return; + } + } + + bool execute() + { + schifra::utils::timer timer; + timer.start(); + + bool result = stage1() && + stage2() && + stage3() && + stage4() && + stage5() && + stage6() && + stage7() && + stage8() && + stage9() && + stage10() && + stage11() && + stage12() ; + + timer.stop(); + + double time = timer.time(); + + print_codec_properties(); + std::cout << "Blocks decoded: " << blocks_processed_ << + "\tDecoding Failures: " << block_failures_ << + "\tRate: " << ((blocks_processed_ * data_length) * 8.0) / (1048576.0 * time) << "Mbps" << std::endl; + /* + Note: The throughput rate is not only the throughput of reed solomon + encoding and decoding, but also that of the steps needed to add + simulated transmission errors to the reed solomon block such as + the calculation of the positions and additions of errors and + erasures to the reed solomon block, which normally in a true + data transmission medium would not be taken into consideration. + */ + return result; + } + + ~codec_validator() + { + delete rs_encoder_; + delete rs_decoder_; + } + + void print_codec_properties() + { + std::cout << "Codec: RS(" << code_length << "," << data_length << "," << fec_length <<") "; + } + + private: + + bool stage1() + { + /* Burst Error Only Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + for (std::size_t error_count = 1; error_count <= (fec_length >> 1); ++error_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors + ( + rs_block, + error_count, + start_position, + 1 + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage1() - Decoding Failure! 
start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage1() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage1() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != error_count) + { + print_codec_properties(); + std::cout << "stage1() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != error_count) + { + print_codec_properties(); + std::cout << "stage1() - Error In The Number Of Corrected Errors! Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage2() + { + /* Burst Erasure Only Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_erasures + ( + rs_block, + erasure_list, + erasure_count, + start_position, + 1 + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage2() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + std::cout << "stage2() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage2() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != erasure_count) + { + print_codec_properties(); + std::cout << "stage2() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != erasure_count) + { + print_codec_properties(); + std::cout << "stage2() - Error In The Number Of Corrected Errors! 
Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage3() + { + /* Consecutive Burst Erasure and Error Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::erasures_errors, + start_position,erasure_count, + erasure_list + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage3() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage3() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage3() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage4() + { + /* Consecutive Burst Error and Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::errors_erasures, + start_position, + erasure_count, + erasure_list + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage4() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage4() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage4() - Discrepancy between the number of errors detected and corrected. 
[" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage5() + { + /* Distanced Burst Erasure and Error Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t between_distance = 1; between_distance <= 10; ++between_distance) + { + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::erasures_errors, + start_position, + erasure_count, + erasure_list, + between_distance + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage5() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage5() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage5() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage6() + { + /* Distanced Burst Error and Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t between_distance = 1; between_distance <= 10; ++between_distance) + { + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_errors_erasures + ( + rs_block, + error_mode::errors_erasures, + start_position, + erasure_count, + erasure_list,between_distance + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage6() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage6() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage6() - Discrepancy between the number of errors detected and corrected. 
[" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + erasure_list.clear(); + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage7() + { + /* Intermittent Error Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + for (std::size_t error_count = 1; error_count < (fec_length >> 1); ++error_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + for (std::size_t scale = 1; scale < 5; ++scale) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors + ( + rs_block, + error_count, + start_position, + scale + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage7() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage7() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage7() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != error_count) + { + print_codec_properties(); + std::cout << "stage7() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != error_count) + { + print_codec_properties(); + std::cout << "stage7() - Error In The Number Of Corrected Errors! Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage8() + { + /* Intermittent Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + for (std::size_t scale = 4; scale < 5; ++scale) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_erasures + ( + rs_block, + erasure_list, + erasure_count, + start_position, + scale + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage8() - Decoding Failure! start position: " << start_position << "\t scale: " << scale << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage8() - Error Correcting Failure! start position: " << start_position << "\t scale: " << scale < erasure_count) + { + print_codec_properties(); + std::cout << "stage8() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected > erasure_count) + { + print_codec_properties(); + std::cout << "stage8() - Error In The Number Of Corrected Errors! 
Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + ++blocks_processed_; + erasure_list.clear(); + } + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage9() + { + /* Burst Interleaved Error and Erasure Combinations */ + + const std::size_t initial_failure_count = block_failures_; + + erasure_locations_t erasure_list; + + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block_type rs_block = rs_block_original; + + corrupt_message_interleaved_errors_erasures + ( + rs_block, + start_position, + erasure_count, + erasure_list + ); + + if (!rs_decoder_->decode(rs_block,erasure_list)) + { + print_codec_properties(); + std::cout << "stage9() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage9() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage9() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + ++blocks_processed_; + erasure_list.clear(); + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage10() + { + /* Segmented Burst Errors */ + + const std::size_t initial_failure_count = block_failures_; + + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + for (std::size_t distance_between_blocks = 0; distance_between_blocks < 5; ++distance_between_blocks) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors_segmented + ( + rs_block, + start_position, + distance_between_blocks + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage10() - Decoding Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage10() - Error Correcting Failure! start position: " << start_position << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage10() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + + return (block_failures_ == initial_failure_count); + } + + bool stage11() + { + /* No Errors */ + + const std::size_t initial_failure_count = block_failures_; + + block_type rs_block = rs_block_original; + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage11() - Decoding Failure!" << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != 0) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" 
<< std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != 0) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" << std::endl; + ++block_failures_; + } + else if (rs_block.unrecoverable) + { + print_codec_properties(); + std::cout << "stage11() - Error Correcting Failure!" << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + + return (block_failures_ == initial_failure_count); + } + + bool stage12() + { + /* Random Errors Only */ + + const std::size_t initial_failure_count = block_failures_; + + std::vector random_error_index; + generate_error_index((fec_length >> 1),random_error_index,0xA5A5A5A5); + + for (std::size_t error_count = 1; error_count <= (fec_length >> 1); ++error_count) + { + for (std::size_t error_index = 0; error_index < error_index_size; ++error_index) + { + block_type rs_block = rs_block_original; + + corrupt_message_all_errors_at_index + ( + rs_block, + error_count, + error_index, + random_error_index + ); + + if (!rs_decoder_->decode(rs_block)) + { + print_codec_properties(); + std::cout << "stage12() - Decoding Failure! error index: " << error_index << std::endl; + ++block_failures_; + } + else if (!is_block_equivelent(rs_block,message)) + { + print_codec_properties(); + std::cout << "stage12() - Error Correcting Failure! error index: " << error_index << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != rs_block.errors_corrected) + { + print_codec_properties(); + std::cout << "stage12() - Discrepancy between the number of errors detected and corrected. [" << rs_block.errors_detected << "," << rs_block.errors_corrected << "]" << std::endl; + ++block_failures_; + } + else if (rs_block.errors_detected != error_count) + { + print_codec_properties(); + std::cout << "stage12() - Error In The Number Of Detected Errors! Errors Detected: " << rs_block.errors_detected << std::endl; + ++block_failures_; + } + else if (rs_block.errors_corrected != error_count) + { + print_codec_properties(); + std::cout << "stage12() - Error In The Number Of Corrected Errors! 
Errors Corrected: " << rs_block.errors_corrected << std::endl; + ++block_failures_; + } + + ++blocks_processed_; + } + } + + return (block_failures_ == initial_failure_count); + } + + protected: + + codec_validator() {} + + private: + + codec_validator(const codec_validator&); + const codec_validator& operator=(const codec_validator&); + + const galois::field& field_; + galois::field_polynomial generator_polynomial_; + encoder_type* rs_encoder_; + decoder_type* rs_decoder_; + block_type rs_block_original; + const std::string& message; + const unsigned int genpoly_initial_index_; + unsigned int blocks_processed_; + unsigned int block_failures_; + }; + + template + void create_messages(std::vector& message_list, const bool full_test_set = false) + { + /* Various message bit patterns */ + + message_list.clear(); + + if (full_test_set) + { + for (std::size_t i = 0; i < 256; ++i) + { + message_list.push_back(std::string(data_length, static_cast(i))); + } + } + else + { + message_list.push_back(std::string(data_length,static_cast(0x00))); + message_list.push_back(std::string(data_length,static_cast(0xAA))); + message_list.push_back(std::string(data_length,static_cast(0xA5))); + message_list.push_back(std::string(data_length,static_cast(0xAC))); + message_list.push_back(std::string(data_length,static_cast(0xCA))); + message_list.push_back(std::string(data_length,static_cast(0x5A))); + message_list.push_back(std::string(data_length,static_cast(0xCC))); + message_list.push_back(std::string(data_length,static_cast(0xF0))); + message_list.push_back(std::string(data_length,static_cast(0x0F))); + message_list.push_back(std::string(data_length,static_cast(0xFF))); + message_list.push_back(std::string(data_length,static_cast(0x92))); + message_list.push_back(std::string(data_length,static_cast(0x6D))); + message_list.push_back(std::string(data_length,static_cast(0x77))); + message_list.push_back(std::string(data_length,static_cast(0x7A))); + message_list.push_back(std::string(data_length,static_cast(0xA7))); + message_list.push_back(std::string(data_length,static_cast(0xE5))); + message_list.push_back(std::string(data_length,static_cast(0xEB))); + } + + std::string tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 0) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 0) ? 
static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0xFF); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0xFF)); + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0x00); + } + + message_list.push_back(tmp_str); + } + + template + inline bool codec_validation_test(const std::size_t prim_poly_size,const unsigned int prim_poly[]) + { + const unsigned int data_length = code_length - fec_length; + + galois::field field(field_descriptor,prim_poly_size,prim_poly); + std::vector message_list; + create_messages(message_list); + + for (std::size_t i = 0; i < message_list.size(); ++i) + { + codec_validator + validator(field, gen_poly_index, message_list[i]); + + if (!validator.execute()) + { + return false; + } + } + + return true; + } + + template + inline bool shortened_codec_validation_test(const std::size_t prim_poly_size,const unsigned int prim_poly[]) + { + typedef shortened_encoder encoder_type; + typedef shortened_decoder decoder_type; + + const unsigned int data_length = code_length - fec_length; + + galois::field field(field_descriptor,prim_poly_size,prim_poly); + std::vector message_list; + create_messages(message_list); + + for (std::size_t i = 0; i < message_list.size(); ++i) + { + codec_validator + validator(field,gen_poly_index,message_list[i]); + + if (!validator.execute()) + { + return false; + } + + } + + return true; + } + + inline bool codec_validation_test00() + { + return codec_validation_test<8,120,255, 2>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 4>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 6>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 12>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 22>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 24>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 32>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 64>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 80>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255, 96>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && + codec_validation_test<8,120,255,128>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) ; + } + + inline bool codec_validation_test01() + { + return 
shortened_codec_validation_test<8,120,126,14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 1 RS Code */ + shortened_codec_validation_test<8,120,194,16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 2 RS Code */ + shortened_codec_validation_test<8,120,219,18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 3 RS Code */ + shortened_codec_validation_test<8,120,225,20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) && /* Intelsat 4 RS Code */ + shortened_codec_validation_test<8, 1,204,16>(galois::primitive_polynomial_size05,galois::primitive_polynomial05) && /* DBV/MPEG-2 TSP RS Code */ + shortened_codec_validation_test<8, 1,104,27>(galois::primitive_polynomial_size05,galois::primitive_polynomial05) && /* Magnetic Storage Outer RS Code */ + shortened_codec_validation_test<8, 1,204,12>(galois::primitive_polynomial_size05,galois::primitive_polynomial05) && /* Magnetic Storage Inner RS Code */ + shortened_codec_validation_test<8,120, 72,10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06) ; /* VDL Mode 3 RS Code */ + } + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_decoder.hpp b/modem/fec/schifra_reed_solomon_decoder.hpp new file mode 100644 index 0000000..498e133 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_decoder.hpp @@ -0,0 +1,485 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
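For orientation, here is a minimal end-to-end usage sketch of the codec that the validation routines above exercise. The RS(255,239) geometry, generator index 120 and primitive_polynomial06 are taken from codec_validation_test00; the template argument lists are stripped in this dump, so the <code_length,fec_length> parameters below are assumed, and the error injection is illustrative only, not the modem's actual integration of these headers.

// Sketch only: encode one RS(255,239) block, flip two symbols, decode again.
#include <cstddef>
#include <iostream>
#include <string>

#include "schifra_galois_field.hpp"
#include "schifra_galois_field_polynomial.hpp"
#include "schifra_sequential_root_generator_polynomial_creator.hpp"
#include "schifra_reed_solomon_block.hpp"
#include "schifra_reed_solomon_encoder.hpp"
#include "schifra_reed_solomon_decoder.hpp"

int main()
{
   const std::size_t field_descriptor = 8;    // GF(2^8)
   const std::size_t gen_poly_index   = 120;  // as in codec_validation_test00
   const std::size_t code_length      = 255;
   const std::size_t fec_length       = 16;   // corrects up to 8 symbol errors
   const std::size_t data_length      = code_length - fec_length;

   const schifra::galois::field field(field_descriptor,
                                      schifra::galois::primitive_polynomial_size06,
                                      schifra::galois::primitive_polynomial06);

   schifra::galois::field_polynomial generator_polynomial(field);

   if (!schifra::make_sequential_root_generator_polynomial(field, gen_poly_index,
                                                           fec_length, generator_polynomial))
   {
      return 1;
   }

   const schifra::reed_solomon::encoder<code_length,fec_length> rs_encoder(field, generator_polynomial);
   const schifra::reed_solomon::decoder<code_length,fec_length> rs_decoder(field, gen_poly_index);

   const std::string message(data_length, static_cast<char>(0x5A));
   schifra::reed_solomon::block<code_length,fec_length> rs_block;

   if (!rs_encoder.encode(message, rs_block)) return 1;

   rs_block[0]  ^= 0x3C;   // inject two symbol errors (at most fec_length / 2)
   rs_block[10] ^= 0x77;

   if (!rs_decoder.decode(rs_block)) return 1;

   std::cout << "corrected " << rs_block.errors_corrected << " symbol errors" << std::endl;

   return 0;
}

The same pattern, with the shortened_encoder/shortened_decoder variants, is what the shortened validation test below drives for the standard Intelsat, DVB and VDL code geometries.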
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_DECODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_DECODER_HPP + + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + namespace reed_solomon + { + + template + class decoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block block_type; + + decoder(const galois::field& field, const unsigned int& gen_initial_index = 0) + : decoder_valid_(field.size() == code_length), + field_(field), + X_(galois::generate_X(field_)), + gen_initial_index_(gen_initial_index) + { + if (decoder_valid_) + { + //Note: code_length and field size can be used interchangeably + create_lookup_tables(); + } + }; + + const galois::field& field() const + { + return field_; + } + + bool decode(block_type& rsblock) const + { + std::vector erasure_list; + return decode(rsblock,erasure_list); + } + + bool decode(block_type& rsblock, const erasure_locations_t& erasure_list) const + { + if ((!decoder_valid_) || (erasure_list.size() > fec_length)) + { + rsblock.errors_detected = 0; + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error0; + + return false; + } + + galois::field_polynomial received(field_,code_length - 1); + load_message(received,rsblock); + + galois::field_polynomial syndrome(field_); + + if (compute_syndrome(received,syndrome) == 0) + { + rsblock.errors_detected = 0; + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = false; + + return true; + } + + galois::field_polynomial lambda(galois::field_element(field_,1)); + + erasure_locations_t erasure_locations; + + if (!erasure_list.empty()) + { + prepare_erasure_list(erasure_locations, erasure_list); + + compute_gamma(lambda, erasure_locations); + } + + if (erasure_list.size() < fec_length) + { + modified_berlekamp_massey_algorithm(lambda, syndrome, erasure_list.size()); + } + + std::vector error_locations; + + find_roots(lambda, error_locations); + + if (0 == error_locations.size()) + { + /* + Syndrome is non-zero yet no error locations have + been obtained, conclusion: + It is possible that there are MORE errrors in the + message than can be detected and corrected for this + particular code. + */ + + rsblock.errors_detected = 0; + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error1; + + return false; + } + else if (((2 * error_locations.size()) - erasure_list.size()) > fec_length) + { + /* + Too many errors\erasures! 
2E + S <= fec_length + L = E + S + E = L - S + 2E = 2L - 2S + 2E + S = 2L - 2S + S + = 2L - S + Where: + L : Error Locations + E : Errors + S : Erasures + + */ + + rsblock.errors_detected = error_locations.size(); + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error2; + + return false; + } + else + rsblock.errors_detected = error_locations.size(); + + return forney_algorithm(error_locations, lambda, syndrome, rsblock); + } + + private: + + decoder(); + decoder(const decoder& dec); + decoder& operator=(const decoder& dec); + + protected: + + void load_message(galois::field_polynomial& received, const block_type& rsblock) const + { + /* + Load message data into received polynomial in reverse order. + */ + + for (std::size_t i = 0; i < code_length; ++i) + { + received[code_length - 1 - i] = rsblock[i]; + } + } + + void create_lookup_tables() + { + root_exponent_table_.reserve(field_.size() + 1); + + for (int i = 0; i < static_cast(field_.size() + 1); ++i) + { + root_exponent_table_.push_back(field_.exp(field_.alpha(code_length - i),(1 - gen_initial_index_))); + } + + syndrome_exponent_table_.reserve(fec_length); + + for (int i = 0; i < static_cast(fec_length); ++i) + { + syndrome_exponent_table_.push_back(field_.alpha(gen_initial_index_ + i)); + } + + gamma_table_.reserve(field_.size() + 1); + + for (int i = 0; i < static_cast(field_.size() + 1); ++i) + { + gamma_table_.push_back((1 + (X_ * galois::field_element(field_,field_.alpha(i))))); + } + } + + void prepare_erasure_list(erasure_locations_t& erasure_locations, const erasure_locations_t& erasure_list) const + { + /* + Note: 1. Erasure positions must be unique. + 2. Erasure positions must exist within the code block. + There are NO exceptions to these rules! + */ + + erasure_locations.resize(erasure_list.size()); + + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + erasure_locations[i] = (code_length - 1 - erasure_list[i]); + } + } + + int compute_syndrome(const galois::field_polynomial& received, + galois::field_polynomial& syndrome) const + { + int error_flag = 0; + syndrome = galois::field_polynomial(field_,fec_length - 1); + + for (std::size_t i = 0; i < fec_length; ++i) + { + syndrome[i] = received(syndrome_exponent_table_[i]); + error_flag |= syndrome[i].poly(); + } + + return error_flag; + } + + void compute_gamma(galois::field_polynomial& gamma, const erasure_locations_t& erasure_locations) const + { + for (std::size_t i = 0; i < erasure_locations.size(); ++i) + { + gamma *= gamma_table_[erasure_locations[i]]; + } + } + + void find_roots(const galois::field_polynomial& poly, std::vector& root_list) const + { + /* + Chien Search: Find the roots of the error locator polynomial + via an exhaustive search over all non-zero elements in the + given finite field. 
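The bound worked out in the comment above (2E + S <= fec_length, with L = E + S error locations) is the whole story of how many errors and erasures one block can absorb. A tiny stand-alone check of that inequality; fec_length = 16, i.e. an RS(255,239)-style code, is assumed purely as an example:

// Sketch: the decodability bound used above, 2*errors + erasures <= fec_length.
#include <cstddef>
#include <iostream>

static bool decodable(std::size_t errors, std::size_t erasures, std::size_t fec_length)
{
   return (2 * errors + erasures) <= fec_length;
}

int main()
{
   const std::size_t fec_length = 16;                       // e.g. RS(255,239)

   std::cout << decodable(8, 0, fec_length) << std::endl;   // 1: 8 errors, no erasures
   std::cout << decodable(4, 8, fec_length) << std::endl;   // 1: 4 errors + 8 erasures
   std::cout << decodable(5, 8, fec_length) << std::endl;   // 0: 18 > 16, unrecoverable

   return 0;
}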
+ */ + + root_list.reserve(fec_length << 1); + root_list.resize(0); + + const std::size_t polynomial_degree = poly.deg(); + + for (int i = 1; i <= static_cast(code_length); ++i) + { + if (0 == poly(field_.alpha(i)).poly()) + { + root_list.push_back(i); + + if (polynomial_degree == root_list.size()) + { + break; + } + } + } + } + + void compute_discrepancy(galois::field_element& discrepancy, + const galois::field_polynomial& lambda, + const galois::field_polynomial& syndrome, + const std::size_t& l, + const std::size_t& round) const + { + /* + Compute the lambda discrepancy at the current round of BMA + */ + + const std::size_t upper_bound = std::min(static_cast(l), lambda.deg()); + + discrepancy = 0; + + for (std::size_t i = 0; i <= upper_bound; ++i) + { + discrepancy += lambda[i] * syndrome[round - i]; + } + } + + void modified_berlekamp_massey_algorithm(galois::field_polynomial& lambda, + const galois::field_polynomial& syndrome, + const std::size_t erasure_count) const + { + /* + Modified Berlekamp-Massey Algorithm + Identify the shortest length linear feed-back shift register (LFSR) + that will generate the sequence equivalent to the syndrome. + */ + + int i = -1; + std::size_t l = erasure_count; + + galois::field_element discrepancy(field_,0); + galois::field_polynomial previous_lambda = lambda << 1; + + for (std::size_t round = erasure_count; round < fec_length; ++round) + { + compute_discrepancy(discrepancy, lambda, syndrome, l, round); + + if (discrepancy != 0) + { + galois::field_polynomial tau = lambda - (discrepancy * previous_lambda); + + if (static_cast(l) < (static_cast(round) - i)) + { + const std::size_t tmp = round - i; + i = static_cast(round - l); + l = tmp; + previous_lambda = lambda / discrepancy; + } + + lambda = tau; + } + + previous_lambda <<= 1; + } + } + + bool forney_algorithm(const std::vector& error_locations, + const galois::field_polynomial& lambda, + const galois::field_polynomial& syndrome, + block_type& rsblock) const + { + /* + The Forney algorithm for computing the error magnitudes + */ + const galois::field_polynomial omega = (lambda * syndrome) % fec_length; + const galois::field_polynomial lambda_derivative = lambda.derivative(); + + rsblock.errors_corrected = 0; + rsblock.zero_numerators = 0; + + for (std::size_t i = 0; i < error_locations.size(); ++i) + { + const unsigned int error_location = error_locations[i]; + const galois::field_symbol alpha_inverse = field_.alpha(error_location); + const galois::field_symbol numerator = (omega(alpha_inverse) * root_exponent_table_[error_location]).poly(); + const galois::field_symbol denominator = lambda_derivative(alpha_inverse).poly(); + + if (0 != numerator) + { + if (0 != denominator) + { + rsblock[error_location - 1] ^= field_.div(numerator, denominator); + rsblock.errors_corrected++; + } + else + { + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error3; + return false; + } + } + else + ++rsblock.zero_numerators; + } + + if (lambda.deg() == static_cast(rsblock.errors_detected)) + return true; + else + { + rsblock.unrecoverable = true; + rsblock.error = block_type::e_decoder_error4; + return false; + } + } + + protected: + + bool decoder_valid_; + const galois::field& field_; + std::vector root_exponent_table_; + std::vector syndrome_exponent_table_; + std::vector gamma_table_; + const galois::field_polynomial X_; + const unsigned int gen_initial_index_; + }; + + template + class shortened_decoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block 
block_type; + + shortened_decoder(const galois::field& field, const unsigned int gen_initial_index = 0) + : decoder_(field, gen_initial_index) + {} + + inline bool decode(block_type& rsblock, const erasure_locations_t& erasure_list) const + { + typename natural_decoder_type::block_type block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < code_length; ++i) + { + block.data[padding_length + i] = rsblock.data[i]; + } + + erasure_locations_t shifted_position_erasure_list(erasure_list.size(),0); + + for (std::size_t i = 0; i < erasure_list.size(); ++i) + { + shifted_position_erasure_list[i] = erasure_list[i] + padding_length; + } + + if (decoder_.decode(block, shifted_position_erasure_list)) + { + for (std::size_t i = 0; i < code_length; ++i) + { + rsblock.data[i] = block.data[padding_length + i]; + } + + rsblock.copy_state(block); + return true; + } + else + { + rsblock.copy_state(block); + return false; + } + } + + inline bool decode(block_type& rsblock) const + { + typename natural_decoder_type::block_type block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < code_length; ++i) + { + block.data[padding_length + i] = rsblock.data[i]; + } + + if (decoder_.decode(block)) + { + for (std::size_t i = 0; i < code_length; ++i) + { + rsblock.data[i] = block.data[padding_length + i]; + } + + rsblock.copy_state(block); + return true; + } + else + { + rsblock.copy_state(block); + return false; + } + } + + private: + + typedef decoder natural_decoder_type; + const natural_decoder_type decoder_; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_encoder.hpp b/modem/fec/schifra_reed_solomon_encoder.hpp new file mode 100644 index 0000000..87641b8 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_encoder.hpp @@ -0,0 +1,204 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
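The shortened decoder above only does index bookkeeping before delegating to the natural-length decoder: padding_length implicit zero symbols are prepended (they carry no information and are never transmitted) and every reported erasure position is shifted by the same amount. A stand-alone sketch of that shift, assuming the natural length 255 for GF(2^8) and the DVB-style RS(204,188) geometry from the validation tests:

// Sketch: index shift performed by the shortened decoder.
// A shortened RS(204,188) block is treated as RS(255,239) whose first
// padding_length = 255 - 204 = 51 data symbols are implicit zeros.
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
   const std::size_t natural_length = 255;
   const std::size_t code_length    = 204;
   const std::size_t padding_length = natural_length - code_length;   // 51

   // Erasure positions reported against the shortened (transmitted) block...
   std::vector<std::size_t> erasures;
   erasures.push_back(0);
   erasures.push_back(17);
   erasures.push_back(203);

   // ...become positions in the zero-padded natural-length block.
   for (std::size_t i = 0; i < erasures.size(); ++i)
   {
      std::cout << erasures[i] << " -> " << (erasures[i] + padding_length) << std::endl;
   }

   return 0;
}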
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_ENCODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_ENCODER_HPP + + +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class encoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block block_type; + + encoder(const galois::field& gfield, const galois::field_polynomial& generator) + : encoder_valid_(code_length == gfield.size()), + field_(gfield), + generator_(generator) + {} + + ~encoder() + {} + + inline bool encode(block_type& rsblock) const + { + if (!encoder_valid_) + { + rsblock.error = block_type::e_encoder_error0; + return false; + } + + const galois::field_polynomial parities = msg_poly(rsblock) % generator_; + const galois::field_symbol mask = field_.mask(); + + if (parities.deg() == (fec_length - 1)) + { + for (std::size_t i = 0; i < fec_length; ++i) + { + rsblock.fec(i) = parities[fec_length - 1 - i].poly() & mask; + } + } + else + { + /* + Note: Encoder should never branch here. + Possible issues to look for: + 1. Generator polynomial degree is not equivelent to fec length + 2. Field and code length are not consistent. + + */ + rsblock.error = block_type::e_encoder_error1; + return false; + } + + return true; + } + + inline bool encode(const std::string& data, block_type& rsblock) const + { + std::string::const_iterator itr = data.begin(); + const galois::field_symbol mask = field_.mask(); + + for (std::size_t i = 0; i < data_length; ++i, ++itr) + { + rsblock.data[i] = static_cast(*itr) & mask; + } + + return encode(rsblock); + } + + private: + + encoder(); + encoder(const encoder& enc); + encoder& operator=(const encoder& enc); + + inline galois::field_polynomial msg_poly(const block_type& rsblock) const + { + galois::field_polynomial message(field_, code_length); + + for (std::size_t i = fec_length; i < code_length; ++i) + { + message[i] = rsblock.data[code_length - 1 - i]; + } + + return message; + } + + const bool encoder_valid_; + const galois::field& field_; + const galois::field_polynomial generator_; + }; + + template + class shortened_encoder + { + public: + + typedef traits::reed_solomon_triat trait; + typedef block block_type; + typedef block short_block_t; + + shortened_encoder(const galois::field& gfield, + const galois::field_polynomial& generator) + : encoder_(gfield, generator) + {} + + inline bool encode(block_type& rsblock) const + { + short_block_t block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < data_length; ++i) + { + block.data[padding_length + i] = rsblock.data[i]; + } + + if (encoder_.encode(block)) + { + for (std::size_t i = 0; i < fec_length; ++i) + { + rsblock.fec(i) = block.fec(i); + } + + return true; + } + else + return false; + } + + inline bool encode(const std::string& data, block_type& rsblock) const + { + short_block_t block; + + std::fill_n(&block[0], padding_length, typename block_type::symbol_type(0)); + + for (std::size_t i = 0; i < data_length; ++i) + { + block.data[padding_length + i] = data[i]; + } + + if (encoder_.encode(block)) + { + for (std::size_t i = 0; i < code_length; ++i) + { + rsblock.data[i] = 
block.data[padding_length + i]; + } + + return true; + } + else + return false; + } + + private: + + const encoder encoder_; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_file_decoder.hpp b/modem/fec/schifra_reed_solomon_file_decoder.hpp new file mode 100644 index 0000000..f189868 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_file_decoder.hpp @@ -0,0 +1,171 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_FILE_DECODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_FILE_DECODER_HPP + + +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class file_decoder + { + public: + + typedef decoder decoder_type; + typedef typename decoder_type::block_type block_type; + + file_decoder(const decoder_type& decoder, + const std::string& input_file_name, + const std::string& output_file_name) + : current_block_index_(0) + { + std::size_t remaining_bytes = schifra::fileio::file_size(input_file_name); + + if (remaining_bytes == 0) + { + std::cout << "reed_solomon::file_decoder() - Error: input file has ZERO size." << std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + if (!in_stream) + { + std::cout << "reed_solomon::file_decoder() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + if (!out_stream) + { + std::cout << "reed_solomon::file_decoder() - Error: output file could not be created." << std::endl; + return; + } + + current_block_index_ = 0; + + while (remaining_bytes >= code_length) + { + process_complete_block(decoder,in_stream,out_stream); + remaining_bytes -= code_length; + current_block_index_++; + } + + if (remaining_bytes > 0) + { + process_partial_block(decoder,in_stream,out_stream,remaining_bytes); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_complete_block(const decoder_type& decoder, + std::ifstream& in_stream, + std::ofstream& out_stream) + { + in_stream.read(&buffer_[0],static_cast(code_length)); + copy(buffer_,code_length,block_); + + if (!decoder.decode(block_)) + { + std::cout << "reed_solomon::file_decoder.process_complete_block() - Error during decoding of block " << current_block_index_ << "!" 
<< std::endl; + return; + } + + for (std::size_t i = 0; i < data_length; ++i) + { + buffer_[i] = static_cast(block_[i]); + } + + out_stream.write(&buffer_[0],static_cast(data_length)); + } + + inline void process_partial_block(const decoder_type& decoder, + std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t& read_amount) + { + if (read_amount <= fec_length) + { + std::cout << "reed_solomon::file_decoder.process_partial_block() - Error during decoding of block " << current_block_index_ << "!" << std::endl; + return; + } + + in_stream.read(&buffer_[0],static_cast(read_amount)); + + for (std::size_t i = 0; i < (read_amount - fec_length); ++i) + { + block_.data[i] = static_cast(buffer_[i]); + } + + if ((read_amount - fec_length) < data_length) + { + for (std::size_t i = (read_amount - fec_length); i < data_length; ++i) + { + block_.data[i] = 0; + } + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + block_.fec(i) = static_cast(buffer_[(read_amount - fec_length) + i]); + } + + if (!decoder.decode(block_)) + { + std::cout << "reed_solomon::file_decoder.process_partial_block() - Error during decoding of block " << current_block_index_ << "!" << std::endl; + return; + } + + for (std::size_t i = 0; i < (read_amount - fec_length); ++i) + { + buffer_[i] = static_cast(block_.data[i]); + } + + out_stream.write(&buffer_[0],static_cast(read_amount - fec_length)); + } + + block_type block_; + std::size_t current_block_index_; + char buffer_[code_length]; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_file_encoder.hpp b/modem/fec/schifra_reed_solomon_file_encoder.hpp new file mode 100644 index 0000000..98649ab --- /dev/null +++ b/modem/fec/schifra_reed_solomon_file_encoder.hpp @@ -0,0 +1,138 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_FILE_ENCODER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_FILE_ENCODER_HPP + + +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class file_encoder + { + public: + + typedef encoder encoder_type; + typedef typename encoder_type::block_type block_type; + + file_encoder(const encoder_type& encoder, + const std::string& input_file_name, + const std::string& output_file_name) + { + std::size_t remaining_bytes = schifra::fileio::file_size(input_file_name); + if (remaining_bytes == 0) + { + std::cout << "reed_solomon::file_encoder() - Error: input file has ZERO size." 
<< std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + if (!in_stream) + { + std::cout << "reed_solomon::file_encoder() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + if (!out_stream) + { + std::cout << "reed_solomon::file_encoder() - Error: output file could not be created." << std::endl; + return; + } + + std::memset(data_buffer_,0,sizeof(data_buffer_)); + std::memset(fec_buffer_ ,0,sizeof(fec_buffer_ )); + + while (remaining_bytes >= data_length) + { + process_block(encoder,in_stream,out_stream,data_length); + remaining_bytes -= data_length; + } + + if (remaining_bytes > 0) + { + process_block(encoder,in_stream,out_stream,remaining_bytes); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_block(const encoder_type& encoder, + std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t& read_amount) + { + in_stream.read(&data_buffer_[0],static_cast(read_amount)); + for (std::size_t i = 0; i < read_amount; ++i) + { + block_.data[i] = (data_buffer_[i] & 0xFF); + } + + if (read_amount < data_length) + { + for (std::size_t i = read_amount; i < data_length; ++i) + { + block_.data[i] = 0x00; + } + } + + if (!encoder.encode(block_)) + { + std::cout << "reed_solomon::file_encoder.process_block() - Error during encoding of block!" << std::endl; + return; + } + + for (std::size_t i = 0; i < fec_length; ++i) + { + fec_buffer_[i] = static_cast(block_.fec(i) & 0xFF); + } + + out_stream.write(&data_buffer_[0],static_cast(read_amount)); + out_stream.write(&fec_buffer_[0],fec_length); + } + + block_type block_; + char data_buffer_[data_length]; + char fec_buffer_[fec_length]; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_file_interleaver.hpp b/modem/fec/schifra_reed_solomon_file_interleaver.hpp new file mode 100644 index 0000000..54cd7b4 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_file_interleaver.hpp @@ -0,0 +1,247 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_FILE_INTERLEAVER_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_FILE_INTERLEAVER_HPP + + +#include +#include + +#include "schifra_reed_solomon_interleaving.hpp" +#include "schifra_fileio.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + class file_interleaver + { + public: + + file_interleaver(const std::string& input_file_name, + const std::string& output_file_name) + { + std::size_t remaining_bytes = schifra::fileio::file_size(input_file_name); + + if (0 == remaining_bytes) + { + std::cout << "reed_solomon::file_interleaver() - Error: input file has ZERO size." 
<< std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + + if (!in_stream) + { + std::cout << "reed_solomon::file_interleaver() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + + if (!out_stream) + { + std::cout << "reed_solomon::file_interleaver() - Error: output file could not be created." << std::endl; + return; + } + + while (remaining_bytes >= (block_length * stack_size)) + { + process_block(in_stream,out_stream); + remaining_bytes -= (block_length * stack_size); + } + + if (remaining_bytes > 0) + { + process_incomplete_block(in_stream,out_stream,remaining_bytes); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_block(std::ifstream& in_stream, + std::ofstream& out_stream) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + interleave(block_stack_); + + for (std::size_t i = 0; i < stack_size; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + } + + inline void process_incomplete_block(std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t amount) + { + std::size_t complete_row_count = amount / block_length; + std::size_t remainder = amount % block_length; + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + in_stream.read(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + + if (remainder == 0) + interleave(block_stack_,complete_row_count); + else + interleave(block_stack_,complete_row_count + 1,remainder); + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + out_stream.write(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + } + + data_block block_stack_[stack_size]; + + }; + + template + class file_deinterleaver + { + public: + + file_deinterleaver(const std::string& input_file_name, + const std::string& output_file_name) + { + std::size_t input_file_size = schifra::fileio::file_size(input_file_name); + + if (input_file_size == 0) + { + std::cout << "reed_solomon::file_deinterleaver() - Error: input file has ZERO size." << std::endl; + return; + } + + std::ifstream in_stream(input_file_name.c_str(),std::ios::binary); + + if (!in_stream) + { + std::cout << "reed_solomon::file_deinterleaver() - Error: input file could not be opened." << std::endl; + return; + } + + std::ofstream out_stream(output_file_name.c_str(),std::ios::binary); + + if (!out_stream) + { + std::cout << "reed_solomon::file_deinterleaver() - Error: output file could not be created." 
<< std::endl; + return; + } + + for (std::size_t i = 0; i < (input_file_size / (block_length * stack_size)); ++i) + { + process_block(in_stream,out_stream); + } + + if ((input_file_size % (block_length * stack_size)) != 0) + { + process_incomplete_block(in_stream,out_stream,(input_file_size % (block_length * stack_size))); + } + + in_stream.close(); + out_stream.close(); + } + + private: + + inline void process_block(std::ifstream& in_stream, + std::ofstream& out_stream) + { + for (std::size_t i = 0; i < stack_size; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + deinterleave(block_stack_); + + for (std::size_t i = 0; i < stack_size; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + } + + inline void process_incomplete_block(std::ifstream& in_stream, + std::ofstream& out_stream, + const std::size_t amount) + { + std::size_t complete_row_count = amount / block_length; + std::size_t remainder = amount % block_length; + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + in_stream.read(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + in_stream.read(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + + if (remainder == 0) + deinterleave(block_stack_,complete_row_count); + else + deinterleave(block_stack_,complete_row_count + 1,remainder); + + for (std::size_t i = 0; i < complete_row_count; ++i) + { + out_stream.write(&block_stack_[i][0],static_cast(block_length)); + } + + if (remainder != 0) + { + out_stream.write(&block_stack_[complete_row_count][0],static_cast(remainder)); + } + } + + data_block block_stack_[stack_size]; + + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_general_codec.hpp b/modem/fec/schifra_reed_solomon_general_codec.hpp new file mode 100644 index 0000000..a73ee30 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_general_codec.hpp @@ -0,0 +1,210 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
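For a square stack, the interleaver above is a plain transpose: applied once before transmission and again after reception, it spreads a burst of consecutive channel errors across many code blocks, so each block stays within its own correction capacity. A self-contained demonstration; the 4x4 size and the 0xFF corruption marker are arbitrary choices for the example:

// Sketch: square block interleaving as a transpose, and how it spreads a burst.
#include <cstddef>
#include <iostream>

const std::size_t n = 4;

static void transpose(unsigned char (&stack)[n][n])
{
   for (std::size_t i = 0; i < n; ++i)
   {
      for (std::size_t j = i + 1; j < n; ++j)
      {
         unsigned char tmp = stack[i][j];
         stack[i][j] = stack[j][i];
         stack[j][i] = tmp;
      }
   }
}

int main()
{
   unsigned char stack[n][n];

   for (std::size_t i = 0; i < n; ++i)
      for (std::size_t j = 0; j < n; ++j)
         stack[i][j] = static_cast<unsigned char>(i * n + j);

   transpose(stack);              // interleave before transmission

   for (std::size_t j = 0; j < n; ++j)
      stack[1][j] = 0xFF;         // a 4-byte burst hits one transmitted row

   transpose(stack);              // de-interleave at the receiver

   // Each original block now carries exactly one corrupted symbol.
   for (std::size_t i = 0; i < n; ++i)
   {
      std::size_t hits = 0;

      for (std::size_t j = 0; j < n; ++j)
         if (stack[i][j] == 0xFF) ++hits;

      std::cout << "block " << i << ": " << hits << " corrupted symbol(s)" << std::endl;
   }

   return 0;
}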
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_GENERAL_CODEC_HPP +#define INCLUDE_SCHIFRA_REED_GENERAL_CODEC_HPP + + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_polynomial.hpp" +#include "schifra_sequential_root_generator_polynomial_creator.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + void* create_encoder(const galois::field& field, + const std::size_t& gen_poly_index) + { + const std::size_t data_length = code_length - fec_length; + traits::validate_reed_solomon_code_parameters(); + galois::field_polynomial gen_polynomial(field); + + if ( + !make_sequential_root_generator_polynomial(field, + gen_poly_index, + fec_length, + gen_polynomial) + ) + { + return reinterpret_cast(0); + } + + return new encoder(field,gen_polynomial); + } + + template + void* create_decoder(const galois::field& field, + const std::size_t& gen_poly_index) + { + const std::size_t data_length = code_length - fec_length; + traits::validate_reed_solomon_code_parameters(); + return new decoder(field,static_cast(gen_poly_index)); + } + + template + class general_codec + { + public: + + general_codec(const galois::field& field, + const std::size_t& gen_poly_index) + { + for (std::size_t i = 0; i < max_fec_length; ++i) + { + encoder_[i] = 0; + decoder_[i] = 0; + } + + encoder_[ 2] = create_encoder(field, gen_poly_index); + encoder_[ 4] = create_encoder(field, gen_poly_index); + encoder_[ 6] = create_encoder(field, gen_poly_index); + encoder_[ 8] = create_encoder(field, gen_poly_index); + encoder_[ 10] = create_encoder(field, gen_poly_index); + encoder_[ 12] = create_encoder(field, gen_poly_index); + encoder_[ 14] = create_encoder(field, gen_poly_index); + encoder_[ 16] = create_encoder(field, gen_poly_index); + encoder_[ 18] = create_encoder(field, gen_poly_index); + encoder_[ 20] = create_encoder(field, gen_poly_index); + encoder_[ 22] = create_encoder(field, gen_poly_index); + encoder_[ 24] = create_encoder(field, gen_poly_index); + encoder_[ 26] = create_encoder(field, gen_poly_index); + encoder_[ 28] = create_encoder(field, gen_poly_index); + encoder_[ 30] = create_encoder(field, gen_poly_index); + encoder_[ 32] = create_encoder(field, gen_poly_index); + encoder_[ 64] = create_encoder(field, gen_poly_index); + encoder_[ 80] = create_encoder(field, gen_poly_index); + encoder_[ 96] = create_encoder(field, gen_poly_index); + encoder_[128] = create_encoder(field, gen_poly_index); + + decoder_[ 2] = create_decoder(field, gen_poly_index); + decoder_[ 4] = create_decoder(field, gen_poly_index); + decoder_[ 6] = create_decoder(field, gen_poly_index); + decoder_[ 8] = create_decoder(field, gen_poly_index); + decoder_[ 10] = create_decoder(field, gen_poly_index); + decoder_[ 12] = create_decoder(field, gen_poly_index); + decoder_[ 14] = create_decoder(field, gen_poly_index); + decoder_[ 16] = create_decoder(field, gen_poly_index); + decoder_[ 18] = create_decoder(field, gen_poly_index); + decoder_[ 20] = create_decoder(field, gen_poly_index); + decoder_[ 22] = create_decoder(field, gen_poly_index); + decoder_[ 24] = create_decoder(field, gen_poly_index); + decoder_[ 26] = create_decoder(field, gen_poly_index); + decoder_[ 28] = create_decoder(field, gen_poly_index); + decoder_[ 
30] = create_decoder(field, gen_poly_index); + decoder_[ 32] = create_decoder(field, gen_poly_index); + decoder_[ 64] = create_decoder(field, gen_poly_index); + decoder_[ 80] = create_decoder(field, gen_poly_index); + decoder_[ 96] = create_decoder(field, gen_poly_index); + decoder_[128] = create_decoder(field, gen_poly_index); + } + + ~general_codec() + { + delete static_cast*>(encoder_[ 2]); + delete static_cast*>(encoder_[ 4]); + delete static_cast*>(encoder_[ 6]); + delete static_cast*>(encoder_[ 8]); + delete static_cast*>(encoder_[ 10]); + delete static_cast*>(encoder_[ 12]); + delete static_cast*>(encoder_[ 14]); + delete static_cast*>(encoder_[ 16]); + delete static_cast*>(encoder_[ 18]); + delete static_cast*>(encoder_[ 20]); + delete static_cast*>(encoder_[ 22]); + delete static_cast*>(encoder_[ 24]); + delete static_cast*>(encoder_[ 26]); + delete static_cast*>(encoder_[ 28]); + delete static_cast*>(encoder_[ 30]); + delete static_cast*>(encoder_[ 32]); + delete static_cast*>(encoder_[ 64]); + delete static_cast*>(encoder_[ 80]); + delete static_cast*>(encoder_[ 96]); + delete static_cast*>(encoder_[128]); + + delete static_cast*>(decoder_[ 2]); + delete static_cast*>(decoder_[ 4]); + delete static_cast*>(decoder_[ 6]); + delete static_cast*>(decoder_[ 8]); + delete static_cast*>(decoder_[ 10]); + delete static_cast*>(decoder_[ 12]); + delete static_cast*>(decoder_[ 14]); + delete static_cast*>(decoder_[ 16]); + delete static_cast*>(decoder_[ 18]); + delete static_cast*>(decoder_[ 20]); + delete static_cast*>(decoder_[ 22]); + delete static_cast*>(decoder_[ 24]); + delete static_cast*>(decoder_[ 26]); + delete static_cast*>(decoder_[ 28]); + delete static_cast*>(decoder_[ 30]); + delete static_cast*>(decoder_[ 32]); + delete static_cast*>(decoder_[ 64]); + delete static_cast*>(decoder_[ 80]); + delete static_cast*>(decoder_[ 96]); + delete static_cast*>(decoder_[128]); + } + + template + bool encode(Block& block) const + { + /* + cl : code length + fl : fec length + */ + typedef reed_solomon::encoder encoder_type; + traits::__static_assert__<(Block::trait::fec_length <= max_fec_length)>(); + if (encoder_[Block::trait::fec_length] == 0) + return false; + else + return static_cast(encoder_[Block::trait::fec_length])->encode(block); + } + + template + bool decode(Block& block) const + { + typedef reed_solomon::decoder decoder_type; + traits::__static_assert__<(Block::trait::fec_length <= max_fec_length)>(); + if (decoder_[Block::trait::fec_length] == 0) + return false; + else + return static_cast(decoder_[Block::trait::fec_length])->decode(block); + } + + private: + + void* encoder_[max_fec_length + 1]; + void* decoder_[max_fec_length + 1]; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_interleaving.hpp b/modem/fec/schifra_reed_solomon_interleaving.hpp new file mode 100644 index 0000000..0f62290 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_interleaving.hpp @@ -0,0 +1,639 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. 
The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. *) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_INTERLEAVING_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_INTERLEAVING_HPP + + +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + inline void interleave(block (&block_stack)[code_length]) + { + for (std::size_t i = 0; i < code_length; ++i) + { + for (std::size_t j = i + 1; j < code_length; ++j) + { + typename block::symbol_type tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + template + inline void interleave(block (&block_stack)[row_count]) + { + block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < code_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == code_length) + { + aux_index = 0; + aux_row++; + } + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void interleave(block (&block_stack)[row_count], + const std::size_t partial_code_length) + { + if (partial_code_length == code_length) + { + interleave(block_stack); + } + else + { + block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < partial_code_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == code_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t index = partial_code_length; index < code_length; ++index) + { + for (std::size_t row = 0; row < row_count - 1; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == code_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < code_length - fec_length; ++index) + { + block_stack[row].data[index] = auxiliary_stack[row].data[index]; + } + for (std::size_t index = 0; index < fec_length; ++index) + { + block_stack[row].fec[index] = auxiliary_stack[row].fec[index]; + } + } + + for (std::size_t index = 0; index < partial_code_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + } + } + + template + inline void interleave(data_block (&block_stack)[block_length]) + { + for (std::size_t i = 0; i < block_length; ++i) + { + for (std::size_t j = i + 1; j < block_length; ++j) + { + T tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + template + inline void interleave(data_block (&block_stack)[row_count]) + { + data_block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + 
copy(auxiliary_stack,block_stack); + } + + template + inline void interleave(data_block (&block_stack)[row_count], + const std::size_t partial_block_length) + { + if (partial_block_length == block_length) + { + interleave(block_stack); + } + else + { + data_block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t index = partial_block_length; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count - 1; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + } + } + + template + inline void interleave(data_block block_stack[], + const std::size_t row_count) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + delete[] auxiliary_stack; + } + + template + inline void interleave(data_block block_stack[], + const std::size_t row_count, + const std::size_t partial_block_length) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t index = partial_block_length; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count - 1; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = 0; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + + delete[] auxiliary_stack; + } + + template + inline void deinterleave(block (&block_stack)[row_count]) + { + block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < code_length; ++index) + { + auxiliary_stack[aux_row][aux_index] = 
block_stack[row][index]; + + if (++aux_row == row_count) + { + aux_row = 0; + aux_index++; + } + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void deinterleave(block (&block_stack)[row_count], + const std::size_t partial_code_length) + { + if (partial_code_length == code_length) + { + deinterleave(block_stack); + } + else + { + block auxiliary_stack[row_count]; + + std::size_t aux_row1 = 0; + std::size_t aux_index1 = 0; + + std::size_t aux_row2 = 0; + std::size_t aux_index2 = 0; + + for (std::size_t i = 0; i < partial_code_length * row_count; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 == row_count) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == code_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t i = 0; aux_index1 < code_length; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 == (row_count - 1)) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == code_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < code_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_code_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + } + } + + template + inline void deinterleave(data_block (&block_stack)[block_length]) + { + data_block auxiliary_stack[block_length]; + + for (std::size_t row = 0; row < block_length; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + auxiliary_stack[index][row] = block_stack[row][index]; + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void deinterleave(data_block (&block_stack)[row_count]) + { + data_block auxiliary_stack[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_row == row_count) + { + aux_row = 0; + aux_index++; + } + } + } + + copy(auxiliary_stack,block_stack); + } + + template + inline void deinterleave(data_block block_stack[], + const std::size_t row_count) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = 0; + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_row == row_count) + { + aux_row = 0; + aux_index++; + } + } + } + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + delete[] auxiliary_stack; + } + + template + inline void deinterleave(data_block block_stack[], + const std::size_t row_count, + const std::size_t partial_block_length) + { + if (row_count == 1) return; + + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row1 = 0; + std::size_t aux_index1 = 0; + + std::size_t aux_row2 = 0; + std::size_t aux_index2 = 0; + + for (std::size_t i = 0; i < partial_block_length * row_count; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 
== row_count) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == block_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t i = 0; aux_index1 < block_length; ++i) + { + auxiliary_stack[aux_row1][aux_index1] = block_stack[aux_row2][aux_index2]; + + if (++aux_row1 == (row_count - 1)) + { + aux_row1 = 0; + aux_index1++; + } + + if (++aux_index2 == block_length) + { + aux_index2 = 0; + aux_row2++; + } + } + + for (std::size_t row = 0; row < row_count - 1; ++row) + { + for (std::size_t index = 0; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + for (std::size_t index = 0; index < partial_block_length; ++index) + { + block_stack[row_count - 1][index] = auxiliary_stack[row_count - 1][index]; + } + + delete[] auxiliary_stack; + } + + template + inline void interleave_columnskip(data_block* block_stack) + { + for (std::size_t i = 0; i < block_length; ++i) + { + for (std::size_t j = i + 1; j < block_length; ++j) + { + std::size_t x1 = i + skip_columns; + std::size_t x2 = j + skip_columns; + + T tmp = block_stack[i][x2]; + block_stack[i][x2] = block_stack[j][x1]; + block_stack[j][x1] = tmp; + } + } + } + + template + inline void interleave_columnskip(data_block* block_stack, const std::size_t& row_count) + { + data_block* auxiliary_stack = new data_block[row_count]; + + std::size_t aux_row = 0; + std::size_t aux_index = skip_columns; + + for (std::size_t index = skip_columns; index < block_length; ++index) + { + for (std::size_t row = 0; row < row_count; ++row) + { + auxiliary_stack[aux_row][aux_index] = block_stack[row][index]; + + if (++aux_index == block_length) + { + aux_index = skip_columns; + aux_row++; + } + } + } + + for (std::size_t row = 0; row < row_count; ++row) + { + for (std::size_t index = skip_columns; index < block_length; ++index) + { + block_stack[row][index] = auxiliary_stack[row][index]; + } + } + + delete[] auxiliary_stack; + } + + template + inline void interleave(T* block_stack[data_length]) + { + for (std::size_t i = 0; i < data_length; ++i) + { + for (std::size_t j = i + 1; j < data_length; ++j) + { + T tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + template + inline void interleave_columnskip(T* block_stack[data_length]) + { + for (std::size_t i = skip_columns; i < data_length; ++i) + { + for (std::size_t j = i + 1; j < data_length; ++j) + { + T tmp = block_stack[i][j]; + block_stack[i][j] = block_stack[j][i]; + block_stack[j][i] = tmp; + } + } + } + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_product_code.hpp b/modem/fec/schifra_reed_solomon_product_code.hpp new file mode 100644 index 0000000..15f00c4 --- /dev/null +++ b/modem/fec/schifra_reed_solomon_product_code.hpp @@ -0,0 +1,238 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
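The interleaving routines in schifra_reed_solomon_interleaving.hpp all follow the same pattern: a stack of Reed-Solomon blocks is treated as a matrix, written row by row and read back column by column, so a burst of consecutive channel errors is spread over many codewords and each codeword only has to correct a few symbols. A minimal standalone sketch of that idea, using plain byte rows instead of the schifra block objects and hypothetical dimensions:

```c
/*
 * Toy block interleaver: ROWS codewords of COLS bytes are written
 * row by row and read out column by column (and back again).
 * Plain byte arrays stand in for the schifra block objects.
 */
#include <stdio.h>
#include <string.h>

#define ROWS 4   /* number of RS blocks in the stack (hypothetical) */
#define COLS 8   /* symbols per block (hypothetical)                */

static void interleave(unsigned char m[ROWS][COLS])
{
    unsigned char aux[ROWS][COLS];
    int r = 0, c = 0;

    /* walk the stack column-wise, fill the auxiliary stack row-wise */
    for (int col = 0; col < COLS; col++)
        for (int row = 0; row < ROWS; row++)
        {
            aux[r][c] = m[row][col];
            if (++c == COLS) { c = 0; r++; }
        }

    memcpy(m, aux, sizeof(aux));
}

static void deinterleave(unsigned char m[ROWS][COLS])
{
    unsigned char aux[ROWS][COLS];
    int r = 0, c = 0;

    /* inverse walk: row-wise in, column-wise out */
    for (int row = 0; row < ROWS; row++)
        for (int col = 0; col < COLS; col++)
        {
            aux[r][c] = m[row][col];
            if (++r == ROWS) { r = 0; c++; }
        }

    memcpy(m, aux, sizeof(aux));
}

int main(void)
{
    unsigned char m[ROWS][COLS];

    for (int i = 0; i < ROWS * COLS; i++)
        m[i / COLS][i % COLS] = (unsigned char)i;

    interleave(m);
    /* a burst of up to ROWS consecutive bytes now hits ROWS different codewords */
    deinterleave(m);

    for (int i = 0; i < ROWS * COLS; i++)
        if (m[i / COLS][i % COLS] != (unsigned char)i)
            return 1;

    puts("interleave/deinterleave round trip ok");
    return 0;
}
```

The round-trip check at the end mirrors the interleave/deinterleave pairing used by the library.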
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_PRODUCT_CODE_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_PRODUCT_CODE_HPP + + +#include +#include +#include + +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_reed_solomon_interleaving.hpp" +#include "schifra_reed_solomon_bitio.hpp" +#include "schifra_ecc_traits.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + template + class square_product_code_encoder + { + public: + + typedef encoder encoder_type; + typedef block block_type; + typedef traits::reed_solomon_triat trait; + typedef unsigned char data_type; + typedef data_type* data_ptr_type; + + enum { data_size = data_length * data_length }; + enum { total_size = code_length * code_length }; + + square_product_code_encoder(const encoder_type& enc) + : encoder_(enc) + {} + + bool encode(data_ptr_type data) + { + data_ptr_type curr_data_ptr = data; + + for (std::size_t row = 0; row < data_length; ++row, curr_data_ptr += data_length) + { + copy(curr_data_ptr, data_length, block_stack_[row]); + + if (!encoder_.encode(block_stack_[row])) + { + return false; + } + } + + block_type vertical_block; + + for (std::size_t col = 0; col < code_length; ++col) + { + for (std::size_t row = 0; row < data_length; ++row) + { + vertical_block[row] = block_stack_[row][col]; + } + + if (!encoder_.encode(vertical_block)) + { + return false; + } + + for (std::size_t fec_index = 0; fec_index < fec_length; ++fec_index) + { + block_stack_[data_length + fec_index].fec(fec_index) = vertical_block.fec(fec_index); + } + } + + return true; + } + + bool encode_and_interleave(data_ptr_type data) + { + if (!encode(data)) + { + return false; + } + + interleave(block_stack_); + + return true; + } + + void output(data_ptr_type output_data) + { + for (std::size_t row = 0; row < code_length; ++row, output_data += code_length) + { + bitio::convert_symbol_to_data::size>(block_stack_[row].data,output_data,code_length); + } + } + + void clear() + { + for (std::size_t i = 0; i < code_length; ++i) + { + block_stack_[i].clear(); + } + } + + private: + + square_product_code_encoder(const square_product_code_encoder& spce); + square_product_code_encoder& operator=(const square_product_code_encoder& spce); + + block_type block_stack_[code_length]; + const encoder_type& encoder_; + }; + + template + class square_product_code_decoder + { + public: + + typedef decoder decoder_type; + typedef block block_type; + typedef traits::reed_solomon_triat trait; + typedef unsigned char data_type; + typedef data_type* data_ptr_type; + + enum { data_size = data_length * data_length }; + enum { total_size = code_length * code_length }; + + square_product_code_decoder(const decoder_type& decoder) + : decoder_(decoder) + {} + + void decode(data_ptr_type data) + { + copy_proxy(data); + decode_proxy(); + } + + void deinterleave_and_decode(data_ptr_type data) + { + copy_proxy(data); + interleave(block_stack_); + decode_proxy(); + } + + void output(data_ptr_type output_data) + { + for (std::size_t row = 0; row < data_length; ++row, output_data += data_length) + { + bitio::convert_symbol_to_data::size>(block_stack_[row].data,output_data,data_length); + } + } + + void clear() + { + for (std::size_t i = 0; i < code_length; ++i) + { + block_stack_[i].clear(); + } + } + + private: + + 
square_product_code_decoder(const square_product_code_decoder& spcd); + square_product_code_decoder& operator=(const square_product_code_decoder& spcd); + + void copy_proxy(data_ptr_type data) + { + for (std::size_t row = 0; row < code_length; ++row, data += code_length) + { + bitio::convert_data_to_symbol::size>(data,code_length,block_stack_[row].data); + } + } + + void decode_proxy() + { + bool first_iteration_failure = false; + + for (std::size_t row = 0; row < data_length; ++row) + { + if (!decoder_.decode(block_stack_[row])) + { + first_iteration_failure = true; + } + } + + if (!first_iteration_failure) + { + /* + Either no errors detected or all errors have + been detected and corrected. + */ + return; + } + + block_type vertical_block; + + for (std::size_t col = 0; col < code_length; ++col) + { + for (std::size_t row = 0; row < data_length; ++row) + { + vertical_block[row] = block_stack_[row][col]; + } + + decoder_.decode(vertical_block); + } + } + + block_type block_stack_[code_length]; + const decoder_type& decoder_; + }; + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_reed_solomon_speed_evaluator.hpp b/modem/fec/schifra_reed_solomon_speed_evaluator.hpp new file mode 100644 index 0000000..16ac54c --- /dev/null +++ b/modem/fec/schifra_reed_solomon_speed_evaluator.hpp @@ -0,0 +1,411 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
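The square product code above protects every row of a data_length x data_length square with a Reed-Solomon code and then every column of the result; decode_proxy() first decodes all rows and only runs the column pass when a row fails. The sketch below shows the same two-dimensional idea with a deliberately simple stand-in component code (one XOR parity byte per row and per column instead of RS, so it only illustrates the row/column structure, not the schifra algorithm): a single corrupted byte is located by the intersection of the failing row and column and repaired.

```c
/*
 * Toy "product code": one XOR parity byte per row and per column
 * stands in for the RS row/column codes. A single corrupted data
 * byte is located by the intersection of the failing row parity
 * and the failing column parity, then repaired.
 */
#include <stdio.h>

#define N 4  /* data square is N x N (hypothetical size) */

static unsigned char data[N][N], row_par[N], col_par[N];

static void encode(void)
{
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
        {
            row_par[r] ^= data[r][c];   /* row pass    */
            col_par[c] ^= data[r][c];   /* column pass */
        }
}

static int decode(void)
{
    int bad_row = -1, bad_col = -1;

    for (int r = 0; r < N; r++)   /* row pass: find a failing row */
    {
        unsigned char p = 0;
        for (int c = 0; c < N; c++) p ^= data[r][c];
        if (p != row_par[r]) bad_row = r;
    }

    if (bad_row < 0) return 0;    /* all rows ok, nothing to do */

    for (int c = 0; c < N; c++)   /* column pass: locate the column */
    {
        unsigned char p = 0;
        for (int r = 0; r < N; r++) p ^= data[r][c];
        if (p != col_par[c]) bad_col = c;
    }

    if (bad_col < 0) return -1;

    /* repair: XOR the row syndrome into the intersecting byte */
    unsigned char p = row_par[bad_row];
    for (int c = 0; c < N; c++) p ^= data[bad_row][c];
    data[bad_row][bad_col] ^= p;
    return 1;
}

int main(void)
{
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            data[r][c] = (unsigned char)(r * N + c);

    encode();
    data[2][1] ^= 0x5A;                 /* corrupt one byte */
    printf("repaired: %d, byte ok: %d\n",
           decode(), data[2][1] == (unsigned char)(2 * N + 1));
    return 0;
}
```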
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_REED_SOLOMON_SPPED_EVALUATOR_HPP +#define INCLUDE_SCHIFRA_REED_SOLOMON_SPPED_EVALUATOR_HPP + + +#include +#include +#include +#include + +#include "schifra_galois_field.hpp" +#include "schifra_sequential_root_generator_polynomial_creator.hpp" +#include "schifra_reed_solomon_block.hpp" +#include "schifra_reed_solomon_encoder.hpp" +#include "schifra_reed_solomon_decoder.hpp" +#include "schifra_reed_solomon_file_encoder.hpp" +#include "schifra_reed_solomon_file_decoder.hpp" +#include "schifra_error_processes.hpp" +#include "schifra_utilities.hpp" + + +namespace schifra +{ + + namespace reed_solomon + { + + template + void create_messages(const encoder& rs_encoder, + std::vector< block >& original_block_list, + const bool full_test_set = false) + { + const std::size_t data_length = code_length - fec_length; + std::vector message_list; + if (full_test_set) + { + for (unsigned int i = 0; i < 256; ++i) + { + message_list.push_back(std::string(data_length,static_cast(i))); + } + } + else + { + message_list.push_back(std::string(data_length,static_cast(0x00))); + message_list.push_back(std::string(data_length,static_cast(0xAA))); + message_list.push_back(std::string(data_length,static_cast(0xA5))); + message_list.push_back(std::string(data_length,static_cast(0xAC))); + message_list.push_back(std::string(data_length,static_cast(0xCA))); + message_list.push_back(std::string(data_length,static_cast(0x5A))); + message_list.push_back(std::string(data_length,static_cast(0xCC))); + message_list.push_back(std::string(data_length,static_cast(0xF0))); + message_list.push_back(std::string(data_length,static_cast(0x0F))); + message_list.push_back(std::string(data_length,static_cast(0xFF))); + message_list.push_back(std::string(data_length,static_cast(0x92))); + message_list.push_back(std::string(data_length,static_cast(0x6D))); + message_list.push_back(std::string(data_length,static_cast(0x77))); + message_list.push_back(std::string(data_length,static_cast(0x7A))); + message_list.push_back(std::string(data_length,static_cast(0xA7))); + message_list.push_back(std::string(data_length,static_cast(0xE5))); + message_list.push_back(std::string(data_length,static_cast(0xEB))); + } + + std::string tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = static_cast(i); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < data_length; ++i) + { + tmp_str[i] = (((i & 0x01) == 0) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 1) ? static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + for (int i = data_length - 1; i >= 0; --i) + { + tmp_str[i] = (((i & 0x01) == 0) ? 
static_cast(i) : 0x00); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0x00)); + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0xFF); + } + + message_list.push_back(tmp_str); + + tmp_str = std::string(data_length,static_cast(0xFF)) ; + + for (std::size_t i = 0; i < (data_length >> 1); ++i) + { + tmp_str[i] = static_cast(0x00); + } + + message_list.push_back(tmp_str); + + for (std::size_t i = 0; i < message_list.size(); ++i) + { + block current_block; + rs_encoder.encode(message_list[i],current_block); + original_block_list.push_back(current_block); + } + } + + template , + typename RSDecoder = decoder, + std::size_t data_length = code_length - fec_length> + struct all_errors_decoder_speed_test + { + public: + + all_errors_decoder_speed_test(const std::size_t prim_poly_size, const unsigned int prim_poly[]) + { + galois::field field(field_descriptor,prim_poly_size,prim_poly); + galois::field_polynomial generator_polynomial(field); + + if ( + !make_sequential_root_generator_polynomial(field, + gen_poly_index, + fec_length, + generator_polynomial) + ) + { + return; + } + + RSEncoder rs_encoder(field,generator_polynomial); + RSDecoder rs_decoder(field,gen_poly_index); + + std::vector< block > original_block; + + create_messages(rs_encoder,original_block); + + std::vector > rs_block; + std::vector block_index_list; + + for (std::size_t block_index = 0; block_index < original_block.size(); ++block_index) + { + for (std::size_t error_count = 1; error_count <= (fec_length >> 1); ++error_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block block = original_block[block_index]; + corrupt_message_all_errors(block,error_count,start_position,1); + rs_block.push_back(block); + block_index_list.push_back(block_index); + } + } + } + + const std::size_t max_iterations = 100; + std::size_t blocks_decoded = 0; + std::size_t block_failures = 0; + + schifra::utils::timer timer; + timer.start(); + + for (std::size_t j = 0; j < max_iterations; ++j) + { + for (std::size_t i = 0; i < rs_block.size(); ++i) + { + if (!rs_decoder.decode(rs_block[i])) + { + std::cout << "Decoding Failure!" << std::endl; + block_failures++; + } + else if (!are_blocks_equivelent(rs_block[i],original_block[block_index_list[i]])) + { + std::cout << "Error Correcting Failure!" 
<< std::endl; + block_failures++; + } + else + blocks_decoded++; + } + } + + timer.stop(); + + double time = timer.time(); + double mbps = ((max_iterations * rs_block.size() * data_length) * 8.0) / (1048576.0 * time); + + print_codec_properties(); + + if (block_failures == 0) + printf("Blocks decoded: %8d Time:%8.3fsec Rate:%8.3fMbps\n", + static_cast(blocks_decoded), + time, + mbps); + else + std::cout << "Blocks decoded: " << blocks_decoded << "\tDecode Failures: " << block_failures <<"\tTime: " << time <<"sec\tRate: " << mbps << "Mbps" << std::endl; + } + + void print_codec_properties() + { + printf("[All Errors Test] Codec: RS(%03d,%03d,%03d) ", + static_cast(code_length), + static_cast(data_length), + static_cast(fec_length)); + } + }; + + template , + typename RSDecoder = decoder, + std::size_t data_length = code_length - fec_length> + struct all_erasures_decoder_speed_test + { + public: + + all_erasures_decoder_speed_test(const std::size_t prim_poly_size, const unsigned int prim_poly[]) + { + galois::field field(field_descriptor,prim_poly_size,prim_poly); + galois::field_polynomial generator_polynomial(field); + + if ( + !make_sequential_root_generator_polynomial(field, + gen_poly_index, + fec_length, + generator_polynomial) + ) + { + return; + } + + RSEncoder rs_encoder(field,generator_polynomial); + RSDecoder rs_decoder(field,gen_poly_index); + + std::vector< block > original_block; + + create_messages(rs_encoder,original_block); + + std::vector > rs_block; + std::vector erasure_list; + std::vector block_index_list; + + for (std::size_t block_index = 0; block_index < original_block.size(); ++block_index) + { + for (std::size_t erasure_count = 1; erasure_count <= fec_length; ++erasure_count) + { + for (std::size_t start_position = 0; start_position < code_length; ++start_position) + { + block block = original_block[block_index]; + erasure_locations_t erasures; + corrupt_message_all_erasures(block,erasures,erasure_count,start_position,1); + + if (erasure_count != erasures.size()) + { + std::cout << "all_erasures_decoder_speed_test() - Failed to properly generate erasures list. Details:"; + std::cout << "(" << block_index << "," << erasure_count << "," << start_position << ")" << std::endl; + } + + rs_block.push_back(block); + erasure_list.push_back(erasures); + block_index_list.push_back(block_index); + } + } + } + + const std::size_t max_iterations = 100; + std::size_t blocks_decoded = 0; + std::size_t block_failures = 0; + + schifra::utils::timer timer; + timer.start(); + + for (std::size_t j = 0; j < max_iterations; ++j) + { + for (std::size_t i = 0; i < rs_block.size(); ++i) + { + if (!rs_decoder.decode(rs_block[i],erasure_list[i])) + { + std::cout << "Decoding Failure!" << std::endl; + block_failures++; + } + else if (!are_blocks_equivelent(rs_block[i],original_block[block_index_list[i]])) + { + std::cout << "Error Correcting Failure!" 
<< std::endl; + block_failures++; + } + else + blocks_decoded++; + } + } + + timer.stop(); + + double time = timer.time(); + double mbps = ((max_iterations * rs_block.size() * data_length) * 8.0) / (1048576.0 * time); + + print_codec_properties(); + + if (block_failures == 0) + printf("Blocks decoded: %8d Time:%8.3fsec Rate:%8.3fMbps\n", + static_cast(blocks_decoded), + time, + mbps); + else + std::cout << "Blocks decoded: " << blocks_decoded << "\tDecode Failures: " << block_failures <<"\tTime: " << time <<"sec\tRate: " << mbps << "Mbps" << std::endl; + } + + void print_codec_properties() + { + printf("[All Erasures Test] Codec: RS(%03d,%03d,%03d) ", + static_cast(code_length), + static_cast(data_length), + static_cast(fec_length)); + } + + }; + + void speed_test_00() + { + all_errors_decoder_speed_test<8,120,255, 2>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 4>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 6>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 8>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 12>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 32>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 48>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 64>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 80>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255, 96>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_errors_decoder_speed_test<8,120,255,128>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + } + + void speed_test_01() + { + all_erasures_decoder_speed_test<8,120,255, 2>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 4>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 6>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 8>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 10>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 12>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 14>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 
16>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 18>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 20>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 32>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 48>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 64>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 80>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255, 96>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + all_erasures_decoder_speed_test<8,120,255,128>(galois::primitive_polynomial_size06,galois::primitive_polynomial06); + } + + } // namespace reed_solomon + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_sequential_root_generator_polynomial_creator.hpp b/modem/fec/schifra_sequential_root_generator_polynomial_creator.hpp new file mode 100644 index 0000000..02c9682 --- /dev/null +++ b/modem/fec/schifra_sequential_root_generator_polynomial_creator.hpp @@ -0,0 +1,64 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
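Both speed tests report throughput as decoded payload bits over wall-clock time: mbps = (max_iterations * blocks * data_length * 8) / (1048576 * seconds); the 1048576 divisor means the figure is strictly mebibit per second. A standalone sketch of the same bookkeeping with the gettimeofday() timing used on Linux (the inner loop is a dummy stand-in for the decoder, and the block counts are example values):

```c
/* Throughput bookkeeping as used by the speed tests above:
 * decoded bits / (2^20 * seconds) -> "Mbps" (strictly: mebibit/s).
 * The inner loop is a dummy stand-in for the RS decoder. */
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    const double iterations  = 100;    /* max_iterations                 */
    const double blocks      = 100000; /* number of test blocks, example */
    const double data_length = 223;    /* e.g. code_length 255 - fec 32  */

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);

    volatile unsigned long sink = 0;
    for (long j = 0; j < (long)(iterations * blocks); j++)
        sink += (unsigned long)j;      /* pretend to decode a block      */

    gettimeofday(&t1, NULL);

    double seconds = (double)(t1.tv_sec  - t0.tv_sec)
                   + (double)(t1.tv_usec - t0.tv_usec) * 1e-6;
    if (seconds <= 0.0) seconds = 1e-9;

    double mbps = (iterations * blocks * data_length * 8.0)
                / (1048576.0 * seconds);

    printf("%.0f blocks in %.3f s -> %.3f Mbps\n",
           iterations * blocks, seconds, mbps);
    return 0;
}
```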
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_SEQUENTIAL_ROOT_GENERATOR_POLYNOMIAL_CREATOR_HPP +#define INCLUDE_SCHIFRA_SEQUENTIAL_ROOT_GENERATOR_POLYNOMIAL_CREATOR_HPP + + +#include + +#include "schifra_galois_field.hpp" +#include "schifra_galois_field_element.hpp" +#include "schifra_galois_field_polynomial.hpp" + + +namespace schifra +{ + + inline bool make_sequential_root_generator_polynomial(const galois::field& field, + const std::size_t initial_index, + const std::size_t num_elements, + galois::field_polynomial& generator_polynomial) + { + if ( + (initial_index >= field.size()) || + ((initial_index + num_elements) > field.size()) + ) + { + return false; + } + + galois::field_element alpha(field, 2); + galois::field_polynomial X = galois::generate_X(field); + generator_polynomial = galois::field_element(field, 1); + + for (std::size_t i = initial_index; i < (initial_index + num_elements); ++i) + { + generator_polynomial *= (X + (alpha ^ static_cast(i))); + } + + return true; + } + +} // namespace schifra + +#endif diff --git a/modem/fec/schifra_utilities.hpp b/modem/fec/schifra_utilities.hpp new file mode 100644 index 0000000..d52844d --- /dev/null +++ b/modem/fec/schifra_utilities.hpp @@ -0,0 +1,198 @@ +/* +(**************************************************************************) +(* *) +(* Schifra *) +(* Reed-Solomon Error Correcting Code Library *) +(* *) +(* Release Version 0.0.1 *) +(* http://www.schifra.com *) +(* Copyright (c) 2000-2020 Arash Partow, All Rights Reserved. *) +(* *) +(* The Schifra Reed-Solomon error correcting code library and all its *) +(* components are supplied under the terms of the General Schifra License *) +(* agreement. The contents of the Schifra Reed-Solomon error correcting *) +(* code library and all its components may not be copied or disclosed *) +(* except in accordance with the terms of that agreement. 
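make_sequential_root_generator_polynomial() builds the Reed-Solomon generator polynomial as the product g(x) = (x + alpha^i) * ... * (x + alpha^(i+num_elements-1)) over the Galois field. The sketch below performs the same construction for GF(2^8), assuming the common primitive polynomial x^8+x^4+x^3+x^2+1 (0x11D) and example values for gen_poly_index and fec_length; the field actually selected by the schifra tables (primitive_polynomial06) is not necessarily this one.

```c
/* Build g(x) = (x + a^i0)(x + a^(i0+1)) ... over GF(2^8).
 * Assumes primitive polynomial 0x11D; the polynomial picked by the
 * schifra tables (primitive_polynomial06) may be a different one. */
#include <stdio.h>

static unsigned char gf_mul(unsigned char a, unsigned char b)
{
    unsigned char p = 0;
    while (b)
    {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (unsigned char)((a << 1) ^ ((a & 0x80) ? 0x1D : 0)); /* mod 0x11D */
    }
    return p;
}

int main(void)
{
    const int first_root = 0;   /* gen_poly_index (example) */
    const int nroots     = 32;  /* fec_length     (example) */

    unsigned char g[256] = { 1 };          /* start with g(x) = 1      */
    unsigned char alpha_i = 1;             /* a^0                      */

    for (int i = 0; i < first_root; i++)   /* advance to a^first_root  */
        alpha_i = gf_mul(alpha_i, 2);

    for (int n = 0; n < nroots; n++)
    {
        /* multiply g(x) by (x + a^(first_root+n)) */
        for (int k = n + 1; k > 0; k--)
            g[k] = (unsigned char)(g[k - 1] ^ gf_mul(g[k], alpha_i));
        g[0] = gf_mul(g[0], alpha_i);
        alpha_i = gf_mul(alpha_i, 2);      /* next root a^(i+1)        */
    }

    printf("g(x) degree %d, coefficients (high to low):", nroots);
    for (int k = nroots; k >= 0; k--) printf(" %02X", g[k]);
    printf("\n");
    return 0;
}
```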
*) +(* *) +(* URL: http://www.schifra.com/license.html *) +(* *) +(**************************************************************************) +*/ + + +#ifndef INCLUDE_SCHIFRA_UTILITES_HPP +#define INCLUDE_SCHIFRA_UTILITES_HPP + + +#include + +#if defined(_WIN32) || defined(__WIN32__) || defined(WIN32) + #include +#else + #include + #include +#endif + + +namespace schifra +{ + + namespace utils + { + + const std::size_t high_bits_in_char[256] = { + 0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, + 4,5,5,6,5,6,6,7,5,6,6,7,6,7,7,8 + }; + + template + inline std::size_t hamming_distance_element(const T v1, const T v2) + { + std::size_t distance = 0; + const unsigned char* it1 = reinterpret_cast(&v1); + const unsigned char* it2 = reinterpret_cast(&v2); + for (std::size_t i = 0; i < sizeof(T); ++i, ++it1, ++it2) + { + distance += high_bits_in_char[((*it1) ^ (*it2)) & 0xFF]; + } + return distance; + } + + inline std::size_t hamming_distance(const unsigned char data1[], const unsigned char data2[], const std::size_t length) + { + std::size_t distance = 0; + const unsigned char* it1 = data1; + const unsigned char* it2 = data2; + for (std::size_t i = 0; i < length; ++i, ++it1, ++it2) + { + distance += high_bits_in_char[((*it1) ^ (*it2)) & 0xFF]; + } + return distance; + } + + template + inline std::size_t hamming_distance(ForwardIterator it1_begin, ForwardIterator it2_begin, ForwardIterator it1_end) + { + std::size_t distance = 0; + ForwardIterator it1 = it1_begin; + ForwardIterator it2 = it2_begin; + for (; it1 != it1_end; ++it1, ++it2) + { + distance += hamming_distance_element(*it1,*it2); + } + return distance; + } + + class timer + { + public: + + #if defined(_WIN32) || defined(__WIN32__) || defined(WIN32) + timer() + : in_use_(false) + { + QueryPerformanceFrequency(&clock_frequency_); + } + + inline void start() + { + in_use_ = true; + QueryPerformanceCounter(&start_time_); + } + + inline void stop() + { + QueryPerformanceCounter(&stop_time_); + in_use_ = false; + } + + inline double time() const + { + return (1.0 * (stop_time_.QuadPart - start_time_.QuadPart)) / (1.0 * clock_frequency_.QuadPart); + } + + #else + + timer() + : in_use_(false) + { + start_time_.tv_sec = 0; + start_time_.tv_usec = 0; + stop_time_.tv_sec = 0; + stop_time_.tv_usec = 0; + } + + inline void start() + { + in_use_ = true; + gettimeofday(&start_time_,0); + } + + inline void stop() + { + gettimeofday(&stop_time_, 0); + in_use_ = false; + } + + inline unsigned long long int usec_time() const + { + if (!in_use_) + { + if (stop_time_.tv_sec >= start_time_.tv_sec) + { + return 1000000 * (stop_time_.tv_sec - start_time_.tv_sec ) + + (stop_time_.tv_usec - start_time_.tv_usec); + } + else + return std::numeric_limits::max(); + } + else + return std::numeric_limits::max(); + } + + inline double time() const + { + return usec_time() * 0.000001; + } + + #endif + + inline bool in_use() const + { + return in_use_; + } + + private: + + bool in_use_; + + #if defined(_WIN32) || defined(__WIN32__) || defined(WIN32) + LARGE_INTEGER start_time_; + LARGE_INTEGER stop_time_; 
+ LARGE_INTEGER clock_frequency_; + #else + struct timeval start_time_; + struct timeval stop_time_; + #endif + }; + + } // namespace utils + +} // namespace schifra + + +#endif diff --git a/modem/fec_fast.c b/modem/fec_fast.c new file mode 100644 index 0000000..d1fb57b --- /dev/null +++ b/modem/fec_fast.c @@ -0,0 +1,67 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +#include "qo100modem.h" +#include "liquid.h" + +fec fecobjTX; +fec fecobjRX; +fec_scheme fs = LIQUID_FEC_SECDED3932; // error-correcting scheme +uint8_t encoded_message[UdpBlocklen]; +uint8_t decoded_message[PayloadLen+framenumlen+CRClen]; + +uint8_t *cfec_Reconstruct(uint8_t *darr) +{ + memset(decoded_message,0,(PayloadLen+framenumlen+CRClen)); + fec_decode(fecobjRX, (PayloadLen+framenumlen+CRClen), darr, decoded_message); + + return decoded_message; +} + +uint8_t *GetFEC(uint8_t *txblock, int len) +{ + if(len != (PayloadLen+framenumlen+CRClen)) + { + printf("wrong FEC encode length, len:%d Payloadlen:%d\n",len,PayloadLen); + exit(0); + } + + fec_encode(fecobjTX, len, txblock, encoded_message); + + return encoded_message; +} + +void initFEC() +{ + int n_enc = fec_get_enc_msg_length(fs,(PayloadLen+framenumlen+CRClen)); + if(n_enc != UdpBlocklen) + { + printf("wrong FEC init length\n"); + exit(0); + } + + fecobjTX = fec_create(fs,NULL); + fecobjRX = fec_create(fs,NULL); +} + diff --git a/modem/fft.c b/modem/fft.c new file mode 100644 index 0000000..8bc4f24 --- /dev/null +++ b/modem/fft.c @@ -0,0 +1,140 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
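fec_fast.c wraps the generic FEC object API of liquid-dsp: fec_get_enc_msg_length() reports how many encoded bytes a given decoded length produces, fec_create() builds the codec object, and fec_encode()/fec_decode() operate on whole byte buffers. A self-contained round trip with the same LIQUID_FEC_SECDED3932 scheme is sketched below; the 32-byte message length is an arbitrary example, not the modem's frame size, and SEC-DED should correct the single flipped bit per codeword.

```c
/* Minimal liquid-dsp FEC round trip, mirroring the calls in fec_fast.c.
 * Message length 32 is an arbitrary example, not the modem's frame size.
 * Build with: gcc fec_demo.c -lliquid */
#include <stdio.h>
#include <string.h>
#include <liquid/liquid.h>

int main(void)
{
    fec_scheme fs = LIQUID_FEC_SECDED3932;          /* SEC-DED (39,32)   */
    unsigned int n = 32;                            /* decoded bytes     */
    unsigned int k = fec_get_enc_msg_length(fs, n); /* encoded bytes     */

    unsigned char msg_org[n], msg_enc[k], msg_dec[n];
    for (unsigned int i = 0; i < n; i++) msg_org[i] = (unsigned char)i;

    fec q = fec_create(fs, NULL);

    fec_encode(q, n, msg_org, msg_enc);

    msg_enc[5] ^= 0x01;                 /* flip one bit on the "channel" */

    fec_decode(q, n, msg_enc, msg_dec);
    fec_destroy(q);

    printf("enc length %u bytes, recovered: %s\n", k,
           memcmp(msg_org, msg_dec, n) == 0 ? "yes" : "no");
    return 0;
}
```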
+* +*/ + +#include "qo100modem.h" +#include +#include + +#define AUDIOSAMPLERATE 8000 + +double *din = NULL; // input data for fft +fftw_complex *cpout = NULL; // ouput data from fft +fftw_plan plan = NULL; +#define fft_rate (AUDIOSAMPLERATE / 10) // resolution: 10 Hz +int fftidx = 0; +int fftcnt = fft_rate/2+1; // number of output values +uint16_t fftout[AUDIOSAMPLERATE / 10/2+1]; + +uint16_t *make_waterfall(uint8_t *pdata, int len, int *retlen) +{ + int fftrdy = 0; + // get the real sample in float (imag is not required for the FFT) + int re=0; + + // GR sends 8 Bytes containing 4x 0x000003e8 (marker) and 4x input-samples (real integer) + #define dlen 8 + static uint8_t rbuf[dlen]; + + for(int i=0; i 0; sh--) + rbuf[sh] = rbuf[sh - 1]; + rbuf[0] = pdata[i]; + + // check for BIG/LITTLE endian + if(rbuf[0] == 0 && rbuf[1] == 0 && rbuf[2] == 3 && rbuf[3] == 0xe8) + { + re = rbuf[4]; + re <<= 24; + re += rbuf[5]; + re <<= 16; + re += rbuf[6]; + re <<= 8; + re += rbuf[7]; + } + else if(rbuf[0] == 0xe8 && rbuf[1] == 3 && rbuf[2] == 0 && rbuf[3] == 0) + { + re = rbuf[7]; + re <<= 24; + re += rbuf[6]; + re <<= 16; + re += rbuf[5]; + re <<= 8; + re += rbuf[4]; + } + else + continue; + + // the value was scaled in GR by 2^24 = 16777216 + // in order to send it in an INT + // undo this scaling + float fre = (float)re / 16777216; + + // fre are the float samples + // fill into the fft input buffer + din[fftidx++] = fre; + + if(fftidx == fft_rate) + { + fftidx = 0; + + // the fft buffer is full, execute the FFT + fftw_execute(plan); + + for (int j = 0; j < fftcnt; j++) + { + // calculate absolute value (magnitute without phase) + float fre = cpout[j][0]; + float fim = cpout[j][1]; + float mag = sqrt((fre * fre) + (fim * fim)); + + fftout[j] = (uint16_t)mag; + + fftrdy = 1; + } + } + } + + if(fftrdy == 1) + { + *retlen = fftcnt; + return fftout; + } + + return NULL; +} + +void init_fft() +{ +char fn[300]; + + sprintf(fn, "capture_fft_%d", fft_rate); // wisdom file for each capture rate + + fftw_import_wisdom_from_filename(fn); + + din = (double *)fftw_malloc(sizeof(double) * fft_rate); + cpout = (fftw_complex *)fftw_malloc(sizeof(fftw_complex) * fft_rate); + + plan = fftw_plan_dft_r2c_1d(fft_rate, din, cpout, FFTW_MEASURE); + + fftw_export_wisdom_to_filename(fn); +} + +void exit_fft() +{ + if(plan) fftw_destroy_plan(plan); + if(din) fftw_free(din); + if(cpout) fftw_free(cpout); +} diff --git a/modem/frame_packer.c b/modem/frame_packer.c new file mode 100644 index 0000000..542fa89 --- /dev/null +++ b/modem/frame_packer.c @@ -0,0 +1,323 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
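make_waterfall() above expects the receiver flowgraph to deliver pairs of 32-bit integers: the constant marker 0x000003E8 (decimal 1000) followed by one audio sample that was scaled by 2^24 so it survives the float-to-integer conversion, with the byte order not known in advance. A small standalone sketch of that unpacking (the helper names are made up for the illustration):

```c
/* Unpack one marker+sample pair as described in fft.c:
 * 4 bytes 0x000003E8 (marker), then 4 bytes sample, both in the
 * same (unknown) byte order; the sample was scaled by 2^24 = 16777216. */
#include <stdio.h>
#include <stdint.h>

#define MARKER 1000u        /* 0x000003E8 */
#define SCALE  16777216.0f  /* 2^24       */

static uint32_t be32(const uint8_t *p)
{ return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) | ((uint32_t)p[2] << 8) | p[3]; }

static uint32_t le32(const uint8_t *p)
{ return ((uint32_t)p[3] << 24) | ((uint32_t)p[2] << 16) | ((uint32_t)p[1] << 8) | p[0]; }

/* returns 1 and writes the de-scaled sample if the 8 bytes start with
 * a valid marker in either byte order, otherwise returns 0 */
static int unpack_sample(const uint8_t buf[8], float *sample)
{
    if (be32(buf) == MARKER) { *sample = (float)(int32_t)be32(buf + 4) / SCALE; return 1; }
    if (le32(buf) == MARKER) { *sample = (float)(int32_t)le32(buf + 4) / SCALE; return 1; }
    return 0;
}

int main(void)
{
    /* big-endian marker followed by big-endian sample 0x00800000 = 0.5 * 2^24 */
    const uint8_t frame[8] = { 0x00, 0x00, 0x03, 0xE8, 0x00, 0x80, 0x00, 0x00 };
    float s;

    if (unpack_sample(frame, &s))
        printf("sample = %f\n", s);   /* prints 0.500000 */
    return 0;
}
```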
+* +*/ + +#include "qo100modem.h" + +void Insert(uint8_t bit); +uint8_t *FindDatablock(); + +uint8_t rxbuffer[UDPBLOCKLEN*8/2+100]; // 3...bits per symbol QPSK, enough space also for QPSK and 8PSK, +100 ... reserve, just to be sure +uint8_t rx_status = 0; + +int framecounter = 0; +int lastframenum = 0; + +// header for TX, +uint8_t TXheaderbytes[HEADERLEN] = {0x53, 0xe1, 0xa6}; +// corresponds to these QPSK symbols: +// bits: 01010011 11100001 10100110 +// QPSK: +// syms: 1 1 0 3 3 2 0 1 2 2 1 2 +// 8PSK: +// syms: 2 4 7 6 0 6 4 6 + +// QPSK +// each header has 12 symbols +// we have 4 constellations +uint8_t QPSK_headertab[4][HEADERLEN*8/2]; + +// 8PSK +// each header has 12 symbols +// we have 8 constellations +uint8_t _8PSK_headertab[8][HEADERLEN*8/3]; + +// init header tables +void init_packer() +{ + // create the QPSK symbol table for the HEADER + // in all possible rotations + convertBytesToSyms_QPSK(TXheaderbytes, QPSK_headertab[0], 3); + for(int i=1; i<4; i++) + rotateQPSKsyms(QPSK_headertab[i-1], QPSK_headertab[i], 12); + + // create the 8PSK symbol table for the HEADER + // in all possible rotations + convertBytesToSyms_8PSK(TXheaderbytes, _8PSK_headertab[0], 3); + for(int i=1; i<8; i++) + rotate8PSKsyms(_8PSK_headertab[i-1], _8PSK_headertab[i], 8); +} + +// packs a payload into an udp data block +// the payload has a size of PAYLOADLEN +// type ... inserted in the "frame type information" field +// status ... specifies first/last frame of a data stream +uint8_t *Pack(uint8_t *payload, int type, int status, int *plen) +{ + FRAME frame; // raw frame without fec + + // polulate the raw frame + + // make the frame counter + if(status & (1<<4)) + framecounter = 0; // first block of a stream + else + framecounter++; + + // insert frame counter and status bits + frame.counter_LSB = framecounter & 0xff; + int framecnt_MSB = (framecounter >> 8) & 0x03; // Bit 8+9 of framecounter + frame.status = framecnt_MSB << 6; + frame.status += ((status & 0x03)<<4); + frame.status += (type & 0x0f); + + // insert the payload + memcpy(frame.payload, payload, PAYLOADLEN); + + // calculate and insert the CRC16 + uint16_t crc16 = Crc16_messagecalc(CRC16TX,(uint8_t *)(&frame), CRCSECUREDLEN); + frame.crc16_MSB = (uint8_t)(crc16 >> 8); + frame.crc16_LSB = (uint8_t)(crc16 & 0xff); + + // make the final arry for transmission + static uint8_t txblock[UDPBLOCKLEN]; + + // calculate the fec and insert into txblock (leave space for the header) + GetFEC((uint8_t *)(&frame), DATABLOCKLEN, txblock+HEADERLEN); + + // scramble + TX_Scramble(txblock+HEADERLEN, FECBLOCKLEN); // scramble all data + + // insert the header + memcpy(txblock,TXheaderbytes,HEADERLEN); + + /* test pattern + * for(int i=0; i>6; // frame counter MSB + framenumrx <<= 8; + framenumrx += frame.counter_LSB; // frame counter LSB + //printf("Frame no.: %d\n",framenumrx); + if (lastframenum != framenumrx) rx_status |= 4; + lastframenum = framenumrx; + if (++lastframenum >= 1024) lastframenum = 0; // 1024 = 2^10 (10 bit frame number) + + // extract information and build the string for the application + // we have 10 Management Byte then the payload follows + static uint8_t payload[PAYLOADLEN+10]; + payload[0] = frame.status & 0x0f; // frame type + payload[1] = (frame.status & 0xc0)>>6; // frame counter MSB + payload[2] = frame.counter_LSB; // frame counter LSB + payload[3] = (frame.status & 0x30)>>4; // first/last frame marker + payload[4] = rx_status; // frame lost information + payload[5] = speed >> 8; // measured line speed + payload[6] = speed; + 
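The header table comment above lists how the three sync bytes 0x53 0xE1 0xA6 map onto symbols: the bit stream is simply cut into MSB-first groups of 2 bits for QPSK and 3 bits for 8PSK. A sketch of that conversion which reproduces the symbol sequences given in the comment (the real convertBytesToSyms_QPSK/_8PSK functions live elsewhere in the repository and may differ in detail):

```c
/* Cut a byte stream into MSB-first groups of `bits_per_symbol` bits.
 * With the header bytes 0x53 0xE1 0xA6 this reproduces the sequences
 * noted in the comment: QPSK 1 1 0 3 3 2 0 1 2 2 1 2, 8PSK 2 4 7 6 0 6 4 6. */
#include <stdio.h>
#include <stdint.h>

static int bytes_to_syms(const uint8_t *bytes, int nbytes,
                         int bits_per_symbol, uint8_t *syms)
{
    int nsyms = 0, sym = 0, nbits = 0;

    for (int i = 0; i < nbytes; i++)
        for (int b = 7; b >= 0; b--)           /* MSB first */
        {
            sym = (sym << 1) | ((bytes[i] >> b) & 1);
            if (++nbits == bits_per_symbol)
            {
                syms[nsyms++] = (uint8_t)sym;
                sym = 0;
                nbits = 0;
            }
        }
    return nsyms;                              /* symbols produced */
}

int main(void)
{
    const uint8_t header[3] = { 0x53, 0xE1, 0xA6 };
    uint8_t syms[12];

    int n = bytes_to_syms(header, 3, 2, syms); /* QPSK: 2 bits/symbol */
    printf("QPSK:");
    for (int i = 0; i < n; i++) printf(" %d", syms[i]);

    n = bytes_to_syms(header, 3, 3, syms);     /* 8PSK: 3 bits/symbol */
    printf("\n8PSK:");
    for (int i = 0; i < n; i++) printf(" %d", syms[i]);
    printf("\n");
    return 0;
}
```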
payload[7] = 0; // free for later use + payload[8] = 0; + payload[9] = 0; + + memcpy(payload+10,frame.payload,PAYLOADLEN); + + return payload; +} diff --git a/modem/frameformat.h b/modem/frameformat.h new file mode 100644 index 0000000..83bf06c --- /dev/null +++ b/modem/frameformat.h @@ -0,0 +1,87 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +/* + * The total length of the FEC-secured part is 255, + * this is a requirement of the Shifra FEC routine, which + * is the best FEC that I have seen so far, highly recommended +*/ + +// total "on the air" frame size +// the total length must be a multiple of 2 and 3, so QPSK and 8PSK symbols fit into full bytes +// this is the case with a total length of 258 +#define HEADERLEN 3 +#define FECBLOCKLEN 255 +#define UDPBLOCKLEN (HEADERLEN + FECBLOCKLEN) + +/* !!! IMPORTANT for GNU RADIO !!! + * the UDP payload size for TX MUST be exactly UDPBLOCKLEN (258 in this case) or + * the transmitter will not align bits to symbols correctly ! + * + * RX payload size is not that important. But the currect size for + * QPSK is UDPBLOCKLEN*8/2 = 1032 and for 8PSK UDPBLOCKLEN*8/3 = 688 + * so we can use 344 which are 2 blocks for 8PSK and 3 blocks for QPSK + * */ + +// size of the elements inside an FECblock +// sum must be 255 +#define FECLEN 32 // supported: 16,32,64,128 +#define STATUSLEN 2 +#define CRCLEN 2 +#define PAYLOADLEN (FECBLOCKLEN - FECLEN - CRCLEN - STATUSLEN) +#define CRCSECUREDLEN (PAYLOADLEN + STATUSLEN) +#define DATABLOCKLEN (PAYLOADLEN + CRCLEN + STATUSLEN) + + +// the header is not FEC secured therefore we give some room for bit +// errors. Only 24 out of the 32 bits must be correct for +// a valid frame detection +extern uint8_t header[HEADERLEN]; + +typedef struct { + // the total size of the following data must be 255 - 32 = 223 bytes + // the FEC is calculated on FRAME with a length of 223 and returns + // a data block with length 255. + + // we use a 10 bits frame counter -> 1024 values + // so we can transmit a data block with a maximum + // size of 255 * 1024 = 261kByte. 
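The constants above fit together as follows: an on-air frame is 3 header bytes plus one 255-byte FEC block, and 258 bytes is divisible by both 2 and 3 bits per symbol, so a frame is always a whole number of QPSK (1032) or 8PSK (688) symbols; inside the 255-byte block, 32 FEC bytes, 2 status bytes and 2 CRC bytes leave 219 payload bytes. A quick sanity check of that arithmetic, with the constants copied from frameformat.h:

```c
/* Sanity check of the frame geometry defined in frameformat.h. */
#include <stdio.h>

#define HEADERLEN     3
#define FECBLOCKLEN   255
#define UDPBLOCKLEN   (HEADERLEN + FECBLOCKLEN)            /* 258 bytes  */

#define FECLEN        32
#define STATUSLEN     2
#define CRCLEN        2
#define PAYLOADLEN    (FECBLOCKLEN - FECLEN - CRCLEN - STATUSLEN)
#define DATABLOCKLEN  (PAYLOADLEN + CRCLEN + STATUSLEN)    /* RS data k  */

int main(void)
{
    printf("frame:          %d bytes = %d bits\n", UDPBLOCKLEN, UDPBLOCKLEN * 8);
    printf("QPSK symbols:   %d (2 bits/symbol)\n", UDPBLOCKLEN * 8 / 2);   /* 1032    */
    printf("8PSK symbols:   %d (3 bits/symbol)\n", UDPBLOCKLEN * 8 / 3);   /*  688    */
    printf("payload/frame:  %d bytes\n", PAYLOADLEN);                      /*  219    */
    printf("RS data block:  %d of %d bytes\n", DATABLOCKLEN, FECBLOCKLEN); /* 223/255 */
    printf("max stream:     %d bytes (10 bit frame counter)\n", FECBLOCKLEN * 1024);
    return 0;
}
```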
With the maximum modem speed + // this would be a transmission time of 5,8 minutes which + // is more then enough for a single data block + uint8_t counter_LSB; // lower 8 bits of the frame counter + + // the status byte contains these information: + // bit 0..3 : 4 bit (16 values) frame type information + // bit 4 : first frame of a block if "1" + // bit 5 : last frame of a block if "1" + // bit 6..7 : MSB of the frame counter + uint8_t status; + + // payload + uint8_t payload[PAYLOADLEN]; + + // CRC16 + uint8_t crc16_MSB; + uint8_t crc16_LSB; +} FRAME; diff --git a/modem/install_gnuradio_ubuntu b/modem/install_gnuradio_ubuntu new file mode 100755 index 0000000..f8f7001 --- /dev/null +++ b/modem/install_gnuradio_ubuntu @@ -0,0 +1,6 @@ +sudo apt-get update +sudo apt-get install software-properties-common +sudo add-apt-repository ppa:gnuradio/gnuradio-releases +sudo apt-get update +sudo apt-get install gnuradio +sudo ldconfig diff --git a/modem/main_helper.c b/modem/main_helper.c new file mode 100644 index 0000000..9b39b1a --- /dev/null +++ b/modem/main_helper.c @@ -0,0 +1,93 @@ +/* + * main_helper + * =========== + * by DJ0ABR + * + * functions useful for every main() program start + * + * */ + +#include "qo100modem.h" + +// check if it is already running +int isRunning(char *prgname) +{ + int num = 0; + char s[256]; + sprintf(s,"ps -e | grep %s",prgname); + + FILE *fp = popen(s,"r"); + if(fp) + { + // gets the output of the system command + while (fgets(s, sizeof(s)-1, fp) != NULL) + { + if(strstr(s,prgname) && !strstr(s,"grep")) + { + if(++num == 2) + { + printf("%s is already running, do not start twice !",prgname); + pclose(fp); + return 1; + } + } + } + pclose(fp); + } + return 0; +} + +// signal handler +void sighandler(int signum) +{ + printf("program stopped by signal\n"); + stopModem(); + exit_fft(); + keeprunning = 0; + close(BC_sock_AppToModem); +} + +void install_signal_handler() +{ + + // signal handler, mainly used if the user presses Ctrl-C + struct sigaction sigact; + sigact.sa_handler = sighandler; + sigemptyset(&sigact.sa_mask); + sigact.sa_flags = 0; + sigaction(SIGINT, &sigact, NULL); + sigaction(SIGTERM, &sigact, NULL); + sigaction(SIGQUIT, &sigact, NULL); + sigaction(SIGABRT, &sigact, NULL); // assert() error + + //sigaction(SIGSEGV, &sigact, NULL); + + // switch off signal 13 (broken pipe) + // instead handle the return value of the write or send function + signal(SIGPIPE, SIG_IGN); +} + +int run_console_program(char *cmd) +{ + printf("executing: %s\n",cmd); + int ret = system(cmd); + if(ret){} + + return 0; +} + +void showbytestring(char *title, uint8_t *data, int anz) +{ + printf("%s. Len %d: ",title,anz); + for(int i=0; i +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +int isRunning(char *prgname); +void install_signal_handler(); +void sighandler(int signum); +int run_console_program(char *cmd); diff --git a/modem/qo100modem.c b/modem/qo100modem.c new file mode 100644 index 0000000..c563c25 --- /dev/null +++ b/modem/qo100modem.c @@ -0,0 +1,592 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. 
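The FRAME layout shown above squeezes a 10-bit frame counter and the frame flags into two bytes: counter_LSB holds the low 8 counter bits, while the status byte carries the frame type in bits 0..3, the first/last flags in bits 4..5 and the two counter MSBs in bits 6..7. A small pack/unpack sketch of exactly that layout (the helper names are made up; Pack() and Unpack() in frame_packer.c do this inline):

```c
/* Pack and unpack the counter_LSB/status pair described in frameformat.h:
 * status = [counter MSB: 2 bits][last][first][frame type: 4 bits]. */
#include <stdio.h>
#include <stdint.h>

static void pack_status(uint16_t counter, int first, int last, int type,
                        uint8_t *counter_lsb, uint8_t *status)
{
    counter &= 0x03FF;                                /* 10 bit counter   */
    *counter_lsb = (uint8_t)(counter & 0xFF);
    *status = (uint8_t)(((counter >> 8) & 0x03) << 6  /* counter MSBs     */
            | (last  ? 1 : 0) << 5                    /* last frame flag  */
            | (first ? 1 : 0) << 4                    /* first frame flag */
            | (type & 0x0F));                         /* frame type       */
}

static uint16_t unpack_counter(uint8_t counter_lsb, uint8_t status)
{
    return (uint16_t)(((status >> 6) & 0x03) << 8 | counter_lsb);
}

int main(void)
{
    uint8_t lsb, status;

    pack_status(777, 0, 1, 3, &lsb, &status);         /* frame no. 777    */

    printf("counter_LSB=0x%02X status=0x%02X -> counter=%u type=%u first=%u last=%u\n",
           lsb, status, unpack_counter(lsb, status),
           status & 0x0F, (status >> 4) & 1, (status >> 5) & 1);
    return 0;
}
```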
+* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +* +*/ + +#include "qo100modem.h" + +int Open_BC_Socket(); +void startModem(); +void stopModem(); +void getMyIP(); +void bc_rxdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock); +void appdata_rxdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock); +void GRdata_rxdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock); +void GRdata_FFTdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock); +void GRdata_I_Qdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock); + +// threads will exit if set to 0 +int keeprunning = 1; + +// UDP I/O +int BC_sock_AppToModem = -1; +int DATA_sock_AppToModem = -1; +int DATA_sock_from_GR = -1; +int DATA_sock_FFT_from_GR = -1; +int DATA_sock_I_Q_from_GR = -1; + +int UdpBCport_AppToModem = 40131; +int UdpDataPort_AppToModem = 40132; +int UdpDataPort_ModemToApp = 40133; + +int UdpDataPort_toGR = 40134; +int UdpDataPort_fromGR = 40135; +int UdpDataPort_fromGR_FFT = 40136; +int UdpDataPort_fromGR_I_Q = 40137; + +// op mode depending values +// default mode if not set by the app +int speedmode = 4; +int bitsPerSymbol = 2; // QPSK=2, 8PSK=3 +int constellationSize = 4; // QPSK=4, 8PSK=8 + + +char localIP[]={"127.0.0.1"}; +char ownfilename[]={"qo100modem"}; +char myIP[20]; +char appIP[20] = {0}; +int fixappIP = 0; +int restart_modems = 0; +int doNotLoadModems = 0; + +int main(int argc, char *argv[]) +{ +int opt = 0; +char *modemip = NULL; + + while ((opt = getopt(argc, argv, "m:e:")) != -1) + { + switch(opt) + { + case 'e': + doNotLoadModems = 1; + break; + case 'm': + modemip = optarg; + memset(appIP,0,20); + int len = strlen(modemip); + if(len < 16) + { + memcpy(appIP,modemip,len); + fixappIP = 1; + printf("Application IP set to: %s\n",modemip); + } + else + { + printf("invalid Application IP: %s\n",modemip); + exit(0); + } + break; + } + } + + if(isRunning(ownfilename) == 1) + exit(0); + + install_signal_handler(); + + init_packer(); + initFEC(); + init_fft(); + + // start udp RX to listen for broadcast search message from Application + UdpRxInit(&BC_sock_AppToModem, UdpBCport_AppToModem, &bc_rxdata, &keeprunning); + + // start udp RX for data from application + UdpRxInit(&DATA_sock_AppToModem, UdpDataPort_AppToModem, &appdata_rxdata, &keeprunning); + + // start udp RX to listen for data from GR Receiver + UdpRxInit(&DATA_sock_from_GR, UdpDataPort_fromGR, &GRdata_rxdata, &keeprunning); + + // start udp RX to listen for Audio-Samples (FFT) data from GR Receiver + UdpRxInit(&DATA_sock_FFT_from_GR, UdpDataPort_fromGR_FFT, &GRdata_FFTdata, &keeprunning); + + // start udp RX to listen for IQ data from GR Receiver + UdpRxInit(&DATA_sock_I_Q_from_GR, UdpDataPort_fromGR_I_Q, &GRdata_I_Qdata, &keeprunning); + + getMyIP(); + + printf("QO100modem initialised and running\n"); + + while (keeprunning) + { + if(restart_modems == 1) + { + stopModem(); + startModem(); + restart_modems = 0; + } + + doArraySend(); + + usleep(100); + } + printf("stopped: %d\n",keeprunning); + + close(BC_sock_AppToModem); + + return 0; +} + +typedef struct { + int audio; + int tx; + int rx; +} SPEEDRATE; + +SPEEDRATE sr[9] = { + // QPSK modes + {48000, 
32, 8}, // AudioRate, TX-Resampler, RX-Resampler/4 + {44100, 28, 7}, // see samprate.ods + {44100, 24, 6}, + {48000, 24, 6}, + {44100, 20, 5}, + {48000, 20, 5}, + + // 8PSK modes + {44100, 24, 6}, + {48000, 24, 6} +}; + +void startModem() +{ +char stx[512]; +char srx[512]; + + if(speedmode >= 0 && speedmode <=5) + { + bitsPerSymbol = 2; // QPSK=2, 8PSK=3 + constellationSize = (1<= 6 && speedmode <=7) + { + bitsPerSymbol = 3; // QPSK=2, 8PSK=3 + constellationSize = (1<= 0 && speedmode <=5) + { + sprintf(stx,"python3 qpsk_tx.py -r %d -s %d &",sr[speedmode].tx,sr[speedmode].audio); + sprintf(srx,"python3 qpsk_rx.py -r %d -s %d &",sr[speedmode].rx,sr[speedmode].audio); + } + else if(speedmode >= 6 && speedmode <=7) + { + sprintf(stx,"python3 tx_8psk.py -r %d -s %d &",sr[speedmode].tx,sr[speedmode].audio); + sprintf(srx,"python3 rx_8psk.py -r %d -s %d &",sr[speedmode].rx,sr[speedmode].audio); + } + else + { + printf("wrong modem number\n"); + exit(0); + } + + // the TX modem needs the local IP address as a parameter -i ip + if(run_console_program(stx) == -1) + { + printf("cannot start TX modem\n"); + exit(0); + } + + // the RX modem needs the app's IP address as a parameter -i ip + if(run_console_program(srx) == -1) + { + printf("cannot start RX modem\n"); + exit(0); + } +} + +void stopModem() +{ + if(doNotLoadModems == 1) return; + printf("stop modem\n"); + int ret = system("killall python3"); + if(ret){} + // wait until stop job is done + sleep(1); +} + +void getMyIP() +{ + struct ifaddrs *ifaddr, *ifa; + int s; + char host[NI_MAXHOST]; + + if (getifaddrs(&ifaddr) == -1) + { + printf("getifaddrs error\n"); + exit(0); + } + + + ifa = ifaddr; + while(ifa) + { + s=getnameinfo(ifa->ifa_addr,sizeof(struct sockaddr_in),host, NI_MAXHOST, NULL, 0, NI_NUMERICHOST); + + if(ifa->ifa_addr->sa_family==AF_INET) + { + if (s != 0) + { + printf("getnameinfo() failed: %s\n", gai_strerror(s)); + exit(0); + } + strcpy(myIP, host); + if(strncmp(host,"127",3) != 0) + break; + } + + ifa = ifa->ifa_next; + } + + freeifaddrs(ifaddr); + + return; +} + +// called from UDP RX thread for Broadcast-search from App +void bc_rxdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock) +{ + if (len > 0 && pdata[0] == 0x3c) + { + char rxip[20]; + strcpy(rxip,inet_ntoa(rxsock->sin_addr)); + + if(fixappIP == 0) + { + if(strcmp(appIP,rxip)) + { + printf("new app IP: %s, restarting modems\n",rxip); + restart_modems = 1; + } + strcpy(appIP,rxip); + //printf("app (%s) is searching modem. Sending modem IP to the app\n",appIP); + // App searches for the modem IP, mirror the received messages + // so the app gets an UDP message with this local IP + pdata[0] = 3; + sendUDP(appIP,UdpDataPort_ModemToApp,pdata,1); + } + else + { + // appIP is fixed, answer only to this IP + if(!strcmp(appIP,rxip)) + { + //printf("app (%s) is searching modem. 
Sending modem IP to the app\n",appIP); + restart_modems = 1; + // App searches for the modem IP, mirror the received messages + // so the app gets an UDP message with this local IP + pdata[0] = 3; + sendUDP(appIP,UdpDataPort_ModemToApp,pdata,1); + } + } + } +} + +// called by UDP RX thread for data from App +void appdata_rxdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock) +{ + uint8_t type = pdata[0]; + uint8_t minfo = pdata[1]; + + if(len != (PAYLOADLEN+2)) + { + printf("data from app: wrong length:%d (should be %d)\n",len-2,PAYLOADLEN); + return; + } + + // type values: see oscardata config.cs: frame types + if(type == 16) + { + // Byte 1 contains the resampler ratio for TX and RX modem + speedmode = pdata[1]; + printf("set speedmode to %d\n",speedmode); + restart_modems = 1; + return; + } + + if(type == 17) + { + // auto send file + // TODO + + // for testing only: + // simulate sending a text file with 1kB length + int testlen = 100000; + uint8_t arr[testlen]; + char c = 'A'; + for(int i=0; i'Z') c='A'; + } + arraySend(arr, testlen, 3, (char *)"testfile.txt"); + return; + } + if(type == 18) + { + // auto send folder + // TODO + } + + if(type == 19) + { + // shut down this modem PC + int r = system("sudo shutdown now"); + exit(r); + } + + if(getSending() == 1) return; // already sending (Array sending) + + if(minfo == 0) + { + toGR_Preamble(); // first transmission of a data block, send preamble + toGR_sendData(pdata+2, type, minfo); + } + else if((len-2) < PAYLOADLEN) + { + // if not enough data for a full payload add Zeros + uint8_t payload[PAYLOADLEN]; + memset(payload,0,PAYLOADLEN); + memcpy(payload,pdata+2,len-2); + toGR_sendData(payload, type, minfo); + } + else + { + toGR_sendData(pdata+2, type, minfo); + } +} + +void toGR_Preamble() +{ + srand(123); + // send random data, rx can sync + uint8_t data[UDPBLOCKLEN]; + + // 1byte 1,8ms (about 2ms) + int timeforframe = 2 * UDPBLOCKLEN; // 160 ms + int repeats = 8000 /timeforframe; // for 8000ms = 8s + + for(int i=0; i= 1s + meansumbytes += len; + if(ts < 5000000) + { + // do not measure + return; + } + + // ts ... 
time in us since last measurement + // divide by the number of bits + ts /= (meansumbytes*8); // time for one bit + int tbit = (int)ts; + int sp1 = 1000000/tbit; + // convert speed of symbols to speed of bits + speed = sp1 * bitsPerSymbol / 8; + + int mean = 0; + if(sparr[0] == -1) + { + for(int i=0; i0; i--) + sparr[i] = sparr[i-1]; + sparr[0] = speed; + } + + for(int i=0; i (10000*2+1)) + { + printf("txpl too small !!!\n"); + return; + } + + int bidx = 0; + txpl[bidx++] = 4; // type 4: FFT data follows + + for(int i=0; i> 8; + txpl[bidx++] = fft[i]; + } + sendUDP(appIP,UdpDataPort_ModemToApp,txpl,bidx); + } +} + +uint8_t lastb[12]; + +void display_IQ(uint8_t *pdata, int len) +{ + for (int i = 0; i < len; i++) + { + // insert new byte in lastb + for (int sh = 12 - 1; sh > 0; sh--) + lastb[sh] = lastb[sh - 1]; + lastb[0] = pdata[i]; + + // test if aligned + // for PC + if (lastb[0] == 0 && lastb[1] == 0 && lastb[2] == 3 && lastb[3] == 0xe8) + { + // we are aligned to a re value + int re = lastb[4]; + re <<= 8; + re += lastb[5]; + re <<= 8; + re += lastb[6]; + re <<= 8; + re += lastb[7]; + + int im = lastb[8]; + im <<= 8; + im += lastb[9]; + im <<= 8; + im += lastb[10]; + im <<= 8; + im += lastb[11]; + + double fre = (double)re / 16777216; + double fim = (double)im / 16777216; + printf("re: %f im: %f\n",fre,fim); + + } + // and for ARM + else if (lastb[0] == 0xe8 && lastb[1] == 3 && lastb[2] == 0 && lastb[3] == 0) + { + // we are aligned to a re value + int re = lastb[7]; + re <<= 8; + re += lastb[6]; + re <<= 8; + re += lastb[5]; + re <<= 8; + re += lastb[4]; + + int im = lastb[11]; + im <<= 8; + im += lastb[10]; + im <<= 8; + im += lastb[9]; + im <<= 8; + im += lastb[8]; + + double fre = (double)re / 16777216; + double fim = (double)im / 16777216; + printf("ARM re: %f im: %f\n",fre,fim); + } + } +} + +// called by UDP RX thread for IQ data from GR +void GRdata_I_Qdata(uint8_t *pdata, int len, struct sockaddr_in* rxsock) +{ + // these data are floats multiplied by 2^24 and then converted to int + // for testing convert it back and display it + //display_IQ(pdata,len); + + // send the data "as is" to app + uint8_t txpl[len+1]; + memcpy(txpl+1,pdata,len); + txpl[0] = 5; // type 5: IQ data follows + sendUDP(appIP,UdpDataPort_ModemToApp,txpl,len+1); +} diff --git a/modem/qo100modem.h b/modem/qo100modem.h new file mode 100644 index 0000000..e55c197 --- /dev/null +++ b/modem/qo100modem.h @@ -0,0 +1,90 @@ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "frameformat.h" +#include "main_helper.h" +#include "udp.h" + +#define jpg_tempfilename "rxdata.jpg" + +#define CRC16TX 0 +#define CRC16RX 1 +#define CRC16FILE 2 + +void stopModem(); +uint8_t *unpack_data(uint8_t *rxd, int len); +void TX_Scramble(uint8_t *data, int len); +uint8_t *RX_Scramble(uint8_t *data, int len); +uint16_t Crc16_messagecalc(int rxtx, uint8_t *data,int len); +uint32_t crc32_messagecalc(int txrx, unsigned char *data, int len); +int cfec_Reconstruct(uint8_t *darr, uint8_t *destination); +uint8_t *Pack(uint8_t *payload, int type, int status, int *plen); +void GetFEC(uint8_t *txblock, int len, uint8_t *destArray); +void initFEC(); +void toGR_Preamble(); +void toGR_sendData(uint8_t *data, int type, int status); +uint16_t *make_waterfall(uint8_t *pdata, int len, int *retlen); +void init_fft(); +void 
exit_fft(); +uint8_t *convertQPSKSymToBytes(uint8_t *rxsymbols); +uint8_t *convert8PSKSymToBytes(uint8_t *rxsymbols, int len); +uint8_t *getPayload(uint8_t *rxb); +void showbytestring(char *title, uint8_t *data, int anz); +void init_packer(); +void convertBytesToSyms_QPSK(uint8_t *bytes, uint8_t *syms, int bytenum); +void rotateQPSKsyms(uint8_t *src, uint8_t *dst, int len); +uint8_t * rotateBackQPSK(uint8_t *buf, int len, int rotations); +void convertBytesToSyms_8PSK(uint8_t *bytes, uint8_t *syms, int bytenum); +void rotate8PSKsyms(uint8_t *src, uint8_t *dst, int len); +uint8_t * rotateBack8PSK(uint8_t *buf, int len, int rotations); +void setSending(uint8_t onoff); +void toGR_Preamble(); +int getSending(); +void doArraySend(); +int arraySend(uint8_t *data, int length, uint8_t type, char *filename); +void shiftleft(uint8_t *data, int shiftnum, int len); +void showbytestring16(char *title, uint16_t *data, int anz); + + +extern int keeprunning; +extern int BC_sock_AppToModem; +extern int speed; +extern int speedmode; +extern int bitsPerSymbol; +extern int constellationSize; + + +/* + * Constellation as produced by the GR Constellation Decoder: + * + * 0 ... +1+1j + * 1 ... -1+1j + * 2 ... -1-1j + * 3 ... +1-1j + * + * + * */ diff --git a/modem/qpsk_rx.py b/modem/qpsk_rx.py new file mode 100755 index 0000000..8dab9df --- /dev/null +++ b/modem/qpsk_rx.py @@ -0,0 +1,209 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +# +# SPDX-License-Identifier: GPL-3.0 +# +# GNU Radio Python Flow Graph +# Title: QPSK RX-Modem +# Author: DJ0ABR +# Description: works with Gnu Radio 3.8.xxx +# GNU Radio version: 3.8.2.0 + +from gnuradio import analog +from gnuradio import audio +from gnuradio import blocks +from gnuradio import digital +from gnuradio import filter +from gnuradio.filter import firdes +from gnuradio import gr +import sys +import signal +from argparse import ArgumentParser +from gnuradio.eng_arg import eng_float, intx +from gnuradio import eng_notation + + +class qpsk_rx(gr.top_block): + + def __init__(self, resamp=5, samp_rate=44100): + gr.top_block.__init__(self, "QPSK RX-Modem") + + ################################################## + # Parameters + ################################################## + self.resamp = resamp + self.samp_rate = samp_rate + + ################################################## + # Variables + ################################################## + self.sps = sps = 4 + self.qpsk__constellation = qpsk__constellation = digital.constellation_rect([0.707+0.707j, -0.707+0.707j, -0.707-0.707j, 0.707-0.707j], [0, 1, 2, 3], + 4, 2, 2, 1, 1).base() + self.qpsk__constellation.gen_soft_dec_lut(8) + self.outputsps = outputsps = 7 + self.nfilts = nfilts = 32 + self.mixf = mixf = 1500 + + ################################################## + # Blocks + ################################################## + self.mmse_resampler_xx_1 = filter.mmse_resampler_cc(0, resamp) + self.mmse_resampler_xx_0 = filter.mmse_resampler_ff(0, samp_rate / 8000) + self.low_pass_filter_0 = filter.fir_filter_fff( + 1, + firdes.low_pass( + 8, + samp_rate, + 3500, + 3100, + firdes.WIN_HAMMING, + 6.76)) + self.digital_pfb_clock_sync_xxx_0 = digital.pfb_clock_sync_ccf(sps, 0.1, firdes.root_raised_cosine(nfilts, nfilts, 1.0/float(sps), 0.35, 11*sps*nfilts), nfilts, nfilts/2, 1.5, outputsps) + self.digital_lms_dd_equalizer_cc_0 = digital.lms_dd_equalizer_cc(15, 0.01, outputsps, qpsk__constellation) + self.digital_costas_loop_cc_0 = digital.costas_loop_cc(0.06, 4, False) + self.digital_constellation_decoder_cb_0 
= digital.constellation_decoder_cb(qpsk__constellation) + self.blocks_udp_sink_0_0_0 = blocks.udp_sink(gr.sizeof_int*1, '127.0.0.1', 40137, 120, False) + self.blocks_udp_sink_0_0 = blocks.udp_sink(gr.sizeof_int*1, '127.0.0.1', 40136, 120, False) + self.blocks_udp_sink_0 = blocks.udp_sink(gr.sizeof_char*1, '127.0.0.1', 40135, 344, False) + self.blocks_multiply_xx_0_1_0 = blocks.multiply_vff(1) + self.blocks_multiply_xx_0_1 = blocks.multiply_vff(1) + self.blocks_multiply_xx_0_0_0 = blocks.multiply_vff(1) + self.blocks_interleave_0_0 = blocks.interleave(gr.sizeof_int*1, 1) + self.blocks_interleave_0 = blocks.interleave(gr.sizeof_int*1, 1) + self.blocks_float_to_int_0_1 = blocks.float_to_int(1, 1) + self.blocks_float_to_int_0_0 = blocks.float_to_int(1, 16777216) + self.blocks_float_to_int_0 = blocks.float_to_int(1, 16777216) + self.blocks_float_to_complex_0 = blocks.float_to_complex(1) + self.blocks_complex_to_float_1 = blocks.complex_to_float(1) + self.blocks_complex_to_float_0 = blocks.complex_to_float(1) + self.audio_source_0 = audio.source(samp_rate, '', True) + self.analog_sig_source_x_0_0_0 = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, mixf, 1, 0, 0) + self.analog_const_source_x_0_1 = analog.sig_source_f(0, analog.GR_CONST_WAVE, 0, 0, 16777216) + self.analog_const_source_x_0_0 = analog.sig_source_i(0, analog.GR_CONST_WAVE, 0, 0, 1000) + self.analog_const_source_x_0 = analog.sig_source_i(0, analog.GR_CONST_WAVE, 0, 0, 1000) + self.analog_agc2_xx_0_0 = analog.agc2_cc(0.01, 0.2, 1, 1) + self.analog_agc2_xx_0_0.set_max_gain(3) + + + + ################################################## + # Connections + ################################################## + self.connect((self.analog_agc2_xx_0_0, 0), (self.digital_costas_loop_cc_0, 0)) + self.connect((self.analog_const_source_x_0, 0), (self.blocks_interleave_0, 0)) + self.connect((self.analog_const_source_x_0_0, 0), (self.blocks_interleave_0_0, 0)) + self.connect((self.analog_const_source_x_0_1, 0), (self.blocks_multiply_xx_0_1_0, 1)) + self.connect((self.analog_sig_source_x_0_0_0, 0), (self.blocks_complex_to_float_1, 0)) + self.connect((self.audio_source_0, 0), (self.low_pass_filter_0, 0)) + self.connect((self.audio_source_0, 0), (self.mmse_resampler_xx_0, 0)) + self.connect((self.blocks_complex_to_float_0, 0), (self.blocks_float_to_int_0, 0)) + self.connect((self.blocks_complex_to_float_0, 1), (self.blocks_float_to_int_0_0, 0)) + self.connect((self.blocks_complex_to_float_1, 1), (self.blocks_multiply_xx_0_0_0, 1)) + self.connect((self.blocks_complex_to_float_1, 0), (self.blocks_multiply_xx_0_1, 1)) + self.connect((self.blocks_float_to_complex_0, 0), (self.mmse_resampler_xx_1, 0)) + self.connect((self.blocks_float_to_int_0, 0), (self.blocks_interleave_0_0, 1)) + self.connect((self.blocks_float_to_int_0_0, 0), (self.blocks_interleave_0_0, 2)) + self.connect((self.blocks_float_to_int_0_1, 0), (self.blocks_interleave_0, 1)) + self.connect((self.blocks_interleave_0, 0), (self.blocks_udp_sink_0_0, 0)) + self.connect((self.blocks_interleave_0_0, 0), (self.blocks_udp_sink_0_0_0, 0)) + self.connect((self.blocks_multiply_xx_0_0_0, 0), (self.blocks_float_to_complex_0, 0)) + self.connect((self.blocks_multiply_xx_0_1, 0), (self.blocks_float_to_complex_0, 1)) + self.connect((self.blocks_multiply_xx_0_1_0, 0), (self.blocks_float_to_int_0_1, 0)) + self.connect((self.digital_constellation_decoder_cb_0, 0), (self.blocks_udp_sink_0, 0)) + self.connect((self.digital_costas_loop_cc_0, 0), (self.blocks_complex_to_float_0, 0)) + 
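        # Overview of the receive chain wired up by these connections:
        # audio.source -> low_pass_filter -> multiplication with the 1500 Hz
        # carrier (I and Q paths) -> float_to_complex -> mmse_resampler
        # (the -r value, e.g. -r 8 at 48000 S/s, leaves 4 samples per symbol)
        # -> pfb_clock_sync -> lms_dd_equalizer -> agc2 -> costas_loop ->
        # constellation_decoder -> UDP port 40135, where the C modem
        # (GRdata_rxdata) picks up the raw symbols.
        # The side paths scale samples by 2^24 and interleave them with the
        # 1000 marker: the 8 kHz resampled audio goes to the waterfall port
        # 40136, the Costas loop output (I/Q) to the constellation port 40137.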
self.connect((self.digital_costas_loop_cc_0, 0), (self.digital_constellation_decoder_cb_0, 0)) + self.connect((self.digital_lms_dd_equalizer_cc_0, 0), (self.analog_agc2_xx_0_0, 0)) + self.connect((self.digital_pfb_clock_sync_xxx_0, 0), (self.digital_lms_dd_equalizer_cc_0, 0)) + self.connect((self.low_pass_filter_0, 0), (self.blocks_multiply_xx_0_0_0, 0)) + self.connect((self.low_pass_filter_0, 0), (self.blocks_multiply_xx_0_1, 0)) + self.connect((self.mmse_resampler_xx_0, 0), (self.blocks_multiply_xx_0_1_0, 0)) + self.connect((self.mmse_resampler_xx_1, 0), (self.digital_pfb_clock_sync_xxx_0, 0)) + + + def get_resamp(self): + return self.resamp + + def set_resamp(self, resamp): + self.resamp = resamp + self.mmse_resampler_xx_1.set_resamp_ratio(self.resamp) + + def get_samp_rate(self): + return self.samp_rate + + def set_samp_rate(self, samp_rate): + self.samp_rate = samp_rate + self.analog_sig_source_x_0_0_0.set_sampling_freq(self.samp_rate) + self.low_pass_filter_0.set_taps(firdes.low_pass(8, self.samp_rate, 3500, 3100, firdes.WIN_HAMMING, 6.76)) + self.mmse_resampler_xx_0.set_resamp_ratio(self.samp_rate / 8000) + + def get_sps(self): + return self.sps + + def set_sps(self, sps): + self.sps = sps + self.digital_pfb_clock_sync_xxx_0.update_taps(firdes.root_raised_cosine(self.nfilts, self.nfilts, 1.0/float(self.sps), 0.35, 11*self.sps*self.nfilts)) + + def get_qpsk__constellation(self): + return self.qpsk__constellation + + def set_qpsk__constellation(self, qpsk__constellation): + self.qpsk__constellation = qpsk__constellation + + def get_outputsps(self): + return self.outputsps + + def set_outputsps(self, outputsps): + self.outputsps = outputsps + + def get_nfilts(self): + return self.nfilts + + def set_nfilts(self, nfilts): + self.nfilts = nfilts + self.digital_pfb_clock_sync_xxx_0.update_taps(firdes.root_raised_cosine(self.nfilts, self.nfilts, 1.0/float(self.sps), 0.35, 11*self.sps*self.nfilts)) + + def get_mixf(self): + return self.mixf + + def set_mixf(self, mixf): + self.mixf = mixf + self.analog_sig_source_x_0_0_0.set_frequency(self.mixf) + + + + +def argument_parser(): + description = 'works with Gnu Radio 3.8.xxx' + parser = ArgumentParser(description=description) + parser.add_argument( + "-r", "--resamp", dest="resamp", type=intx, default=5, + help="Set resamp [default=%(default)r]") + parser.add_argument( + "-s", "--samp-rate", dest="samp_rate", type=intx, default=44100, + help="Set samp_rate [default=%(default)r]") + return parser + + +def main(top_block_cls=qpsk_rx, options=None): + if options is None: + options = argument_parser().parse_args() + tb = top_block_cls(resamp=options.resamp, samp_rate=options.samp_rate) + + def sig_handler(sig=None, frame=None): + tb.stop() + tb.wait() + + sys.exit(0) + + signal.signal(signal.SIGINT, sig_handler) + signal.signal(signal.SIGTERM, sig_handler) + + tb.start() + + tb.wait() + + +if __name__ == '__main__': + main() diff --git a/modem/qpsk_tx.py b/modem/qpsk_tx.py new file mode 100755 index 0000000..2530051 --- /dev/null +++ b/modem/qpsk_tx.py @@ -0,0 +1,140 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +# +# SPDX-License-Identifier: GPL-3.0 +# +# GNU Radio Python Flow Graph +# Title: QPSK TX-Modem +# Author: DJ0ABR +# Copyright: DJ0ABR +# Description: requires GNU Radio 3.8xxx +# GNU Radio version: 3.8.2.0 + +from gnuradio import analog +from gnuradio import audio +from gnuradio import blocks +from gnuradio import digital +from gnuradio import gr +from gnuradio.filter import firdes +import sys +import signal +from argparse import 
ArgumentParser +from gnuradio.eng_arg import eng_float, intx +from gnuradio import eng_notation + + +class qpsk_tx(gr.top_block): + + def __init__(self, resamprate=20, samp_rate=44100): + gr.top_block.__init__(self, "QPSK TX-Modem ") + + ################################################## + # Parameters + ################################################## + self.resamprate = resamprate + self.samp_rate = samp_rate + + ################################################## + # Variables + ################################################## + self.qpsk__constellation = qpsk__constellation = digital.constellation_rect([1+1j, -1+1j, -1-1j, 1-1j], [0, 1, 2, 3], + 4, 2, 2, 1, 1).base() + self.mixf = mixf = 1500 + + ################################################## + # Blocks + ################################################## + self.digital_constellation_modulator_0 = digital.generic_mod( + constellation=qpsk__constellation, + differential=False, + samples_per_symbol=resamprate, + pre_diff_code=True, + excess_bw=0.35, + verbose=False, + log=False) + self.blocks_udp_source_0 = blocks.udp_source(gr.sizeof_char*1, '127.0.0.1', 40134, 258, False) + self.blocks_multiply_xx_0_0 = blocks.multiply_vcc(1) + self.blocks_multiply_const_vxx_0 = blocks.multiply_const_ff(0.05) + self.blocks_complex_to_float_1 = blocks.complex_to_float(1) + self.blocks_add_xx_0 = blocks.add_vff(1) + self.audio_sink_0_0 = audio.sink(samp_rate, '', True) + self.analog_sig_source_x_0_0_0 = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, mixf, 1, 0, 0) + + + + ################################################## + # Connections + ################################################## + self.connect((self.analog_sig_source_x_0_0_0, 0), (self.blocks_multiply_xx_0_0, 1)) + self.connect((self.blocks_add_xx_0, 0), (self.blocks_multiply_const_vxx_0, 0)) + self.connect((self.blocks_complex_to_float_1, 0), (self.blocks_add_xx_0, 0)) + self.connect((self.blocks_complex_to_float_1, 1), (self.blocks_add_xx_0, 1)) + self.connect((self.blocks_multiply_const_vxx_0, 0), (self.audio_sink_0_0, 0)) + self.connect((self.blocks_multiply_xx_0_0, 0), (self.blocks_complex_to_float_1, 0)) + self.connect((self.blocks_udp_source_0, 0), (self.digital_constellation_modulator_0, 0)) + self.connect((self.digital_constellation_modulator_0, 0), (self.blocks_multiply_xx_0_0, 0)) + + + def get_resamprate(self): + return self.resamprate + + def set_resamprate(self, resamprate): + self.resamprate = resamprate + + def get_samp_rate(self): + return self.samp_rate + + def set_samp_rate(self, samp_rate): + self.samp_rate = samp_rate + self.analog_sig_source_x_0_0_0.set_sampling_freq(self.samp_rate) + + def get_qpsk__constellation(self): + return self.qpsk__constellation + + def set_qpsk__constellation(self, qpsk__constellation): + self.qpsk__constellation = qpsk__constellation + + def get_mixf(self): + return self.mixf + + def set_mixf(self, mixf): + self.mixf = mixf + self.analog_sig_source_x_0_0_0.set_frequency(self.mixf) + + + + +def argument_parser(): + description = 'requires GNU Radio 3.8xxx' + parser = ArgumentParser(description=description) + parser.add_argument( + "-r", "--resamprate", dest="resamprate", type=intx, default=20, + help="Set resamprate [default=%(default)r]") + parser.add_argument( + "-s", "--samp-rate", dest="samp_rate", type=intx, default=44100, + help="Set samp_rate [default=%(default)r]") + return parser + + +def main(top_block_cls=qpsk_tx, options=None): + if options is None: + options = argument_parser().parse_args() + tb = 
top_block_cls(resamprate=options.resamprate, samp_rate=options.samp_rate) + + def sig_handler(sig=None, frame=None): + tb.stop() + tb.wait() + + sys.exit(0) + + signal.signal(signal.SIGINT, sig_handler) + signal.signal(signal.SIGTERM, sig_handler) + + tb.start() + + tb.wait() + + +if __name__ == '__main__': + main() diff --git a/modem/rx_8psk.py b/modem/rx_8psk.py new file mode 100755 index 0000000..9547706 --- /dev/null +++ b/modem/rx_8psk.py @@ -0,0 +1,210 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +# +# SPDX-License-Identifier: GPL-3.0 +# +# GNU Radio Python Flow Graph +# Title: 8PSK Modem DJ0ABR +# Author: kurt +# Description: requires GNU Radio 3.8xxx +# GNU Radio version: 3.8.2.0 + +from gnuradio import analog +from gnuradio import audio +from gnuradio import blocks +from gnuradio import digital +from gnuradio import filter +from gnuradio.filter import firdes +from gnuradio import gr +import sys +import signal +from argparse import ArgumentParser +from gnuradio.eng_arg import eng_float, intx +from gnuradio import eng_notation + + +class rx_8psk(gr.top_block): + + def __init__(self, resamp=6, samp_rate=48000): + gr.top_block.__init__(self, "8PSK Modem DJ0ABR") + + ################################################## + # Parameters + ################################################## + self.resamp = resamp + self.samp_rate = samp_rate + + ################################################## + # Variables + ################################################## + self.sps = sps = 4 + self.nfilts = nfilts = 32 + self.rrc_taps = rrc_taps = firdes.root_raised_cosine(nfilts, nfilts, 1.1/float(sps), 0.2, 11*sps*nfilts) + self.outputsps = outputsps = 7 + self.mixf = mixf = 1500 + + ################################################## + # Blocks + ################################################## + self.mmse_resampler_xx_0_0 = filter.mmse_resampler_ff(0, samp_rate / 8000) + self.mmse_resampler_xx_0 = filter.mmse_resampler_cc(0, resamp) + self.low_pass_filter_0 = filter.fir_filter_fff( + 1, + firdes.low_pass( + 12, + samp_rate, + 3900, + 3300, + firdes.WIN_HAMMING, + 6.76)) + self.digital_pfb_clock_sync_xxx_0 = digital.pfb_clock_sync_ccf(sps, 0.06, rrc_taps, nfilts, nfilts/16, 2, outputsps) + self.digital_lms_dd_equalizer_cc_0 = digital.lms_dd_equalizer_cc(15, 0.01, outputsps, digital.constellation_8psk_natural().base()) + self.digital_diff_decoder_bb_0 = digital.diff_decoder_bb(8) + self.digital_costas_loop_cc_0 = digital.costas_loop_cc(0.15, 8, False) + self.digital_constellation_decoder_cb_0 = digital.constellation_decoder_cb(digital.constellation_8psk_natural().base()) + self.blocks_udp_sink_0_0_0 = blocks.udp_sink(gr.sizeof_int*1, '127.0.0.1', 40137, 120, False) + self.blocks_udp_sink_0_0 = blocks.udp_sink(gr.sizeof_int*1, '127.0.0.1', 40136, 120, False) + self.blocks_udp_sink_0 = blocks.udp_sink(gr.sizeof_char*1, '127.0.0.1', 40135, 344, False) + self.blocks_multiply_xx_0_1_0 = blocks.multiply_vff(1) + self.blocks_multiply_xx_0_1 = blocks.multiply_vff(1) + self.blocks_multiply_xx_0_0_0 = blocks.multiply_vff(1) + self.blocks_interleave_0_0 = blocks.interleave(gr.sizeof_int*1, 1) + self.blocks_interleave_0 = blocks.interleave(gr.sizeof_int*1, 1) + self.blocks_float_to_int_0_1 = blocks.float_to_int(1, 1) + self.blocks_float_to_int_0_0 = blocks.float_to_int(1, 16777216) + self.blocks_float_to_int_0 = blocks.float_to_int(1, 16777216) + self.blocks_float_to_complex_0 = blocks.float_to_complex(1) + self.blocks_complex_to_float_1 = blocks.complex_to_float(1) + self.blocks_complex_to_float_0 = 
blocks.complex_to_float(1) + self.audio_source_0 = audio.source(samp_rate, '', True) + self.analog_sig_source_x_0_0_0 = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, mixf, 1, 0, 0) + self.analog_const_source_x_0_1 = analog.sig_source_f(0, analog.GR_CONST_WAVE, 0, 0, 16777216) + self.analog_const_source_x_0_0 = analog.sig_source_i(0, analog.GR_CONST_WAVE, 0, 0, 1000) + self.analog_const_source_x_0 = analog.sig_source_i(0, analog.GR_CONST_WAVE, 0, 0, 1000) + self.analog_agc2_xx_0_0 = analog.agc2_cc(1e-2, 0.2, 1, 2) + self.analog_agc2_xx_0_0.set_max_gain(3) + + + + ################################################## + # Connections + ################################################## + self.connect((self.analog_agc2_xx_0_0, 0), (self.digital_costas_loop_cc_0, 0)) + self.connect((self.analog_const_source_x_0, 0), (self.blocks_interleave_0, 0)) + self.connect((self.analog_const_source_x_0_0, 0), (self.blocks_interleave_0_0, 0)) + self.connect((self.analog_const_source_x_0_1, 0), (self.blocks_multiply_xx_0_1_0, 1)) + self.connect((self.analog_sig_source_x_0_0_0, 0), (self.blocks_complex_to_float_1, 0)) + self.connect((self.audio_source_0, 0), (self.low_pass_filter_0, 0)) + self.connect((self.audio_source_0, 0), (self.mmse_resampler_xx_0_0, 0)) + self.connect((self.blocks_complex_to_float_0, 0), (self.blocks_float_to_int_0, 0)) + self.connect((self.blocks_complex_to_float_0, 1), (self.blocks_float_to_int_0_0, 0)) + self.connect((self.blocks_complex_to_float_1, 1), (self.blocks_multiply_xx_0_0_0, 1)) + self.connect((self.blocks_complex_to_float_1, 0), (self.blocks_multiply_xx_0_1, 1)) + self.connect((self.blocks_float_to_complex_0, 0), (self.mmse_resampler_xx_0, 0)) + self.connect((self.blocks_float_to_int_0, 0), (self.blocks_interleave_0_0, 1)) + self.connect((self.blocks_float_to_int_0_0, 0), (self.blocks_interleave_0_0, 2)) + self.connect((self.blocks_float_to_int_0_1, 0), (self.blocks_interleave_0, 1)) + self.connect((self.blocks_interleave_0, 0), (self.blocks_udp_sink_0_0, 0)) + self.connect((self.blocks_interleave_0_0, 0), (self.blocks_udp_sink_0_0_0, 0)) + self.connect((self.blocks_multiply_xx_0_0_0, 0), (self.blocks_float_to_complex_0, 0)) + self.connect((self.blocks_multiply_xx_0_1, 0), (self.blocks_float_to_complex_0, 1)) + self.connect((self.blocks_multiply_xx_0_1_0, 0), (self.blocks_float_to_int_0_1, 0)) + self.connect((self.digital_constellation_decoder_cb_0, 0), (self.digital_diff_decoder_bb_0, 0)) + self.connect((self.digital_costas_loop_cc_0, 0), (self.blocks_complex_to_float_0, 0)) + self.connect((self.digital_costas_loop_cc_0, 0), (self.digital_constellation_decoder_cb_0, 0)) + self.connect((self.digital_diff_decoder_bb_0, 0), (self.blocks_udp_sink_0, 0)) + self.connect((self.digital_lms_dd_equalizer_cc_0, 0), (self.analog_agc2_xx_0_0, 0)) + self.connect((self.digital_pfb_clock_sync_xxx_0, 0), (self.digital_lms_dd_equalizer_cc_0, 0)) + self.connect((self.low_pass_filter_0, 0), (self.blocks_multiply_xx_0_0_0, 0)) + self.connect((self.low_pass_filter_0, 0), (self.blocks_multiply_xx_0_1, 0)) + self.connect((self.mmse_resampler_xx_0, 0), (self.digital_pfb_clock_sync_xxx_0, 0)) + self.connect((self.mmse_resampler_xx_0_0, 0), (self.blocks_multiply_xx_0_1_0, 0)) + + + def get_resamp(self): + return self.resamp + + def set_resamp(self, resamp): + self.resamp = resamp + self.mmse_resampler_xx_0.set_resamp_ratio(self.resamp) + + def get_samp_rate(self): + return self.samp_rate + + def set_samp_rate(self, samp_rate): + self.samp_rate = samp_rate + 
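        # propagate the new audio rate to every block that depends on it:
        # the carrier source, the low-pass filter taps and the resampler
        # that brings the audio down to 8 kHz for the waterfall path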
self.analog_sig_source_x_0_0_0.set_sampling_freq(self.samp_rate) + self.low_pass_filter_0.set_taps(firdes.low_pass(12, self.samp_rate, 3900, 3300, firdes.WIN_HAMMING, 6.76)) + self.mmse_resampler_xx_0_0.set_resamp_ratio(self.samp_rate / 8000) + + def get_sps(self): + return self.sps + + def set_sps(self, sps): + self.sps = sps + self.set_rrc_taps(firdes.root_raised_cosine(self.nfilts, self.nfilts, 1.1/float(self.sps), 0.2, 11*self.sps*self.nfilts)) + + def get_nfilts(self): + return self.nfilts + + def set_nfilts(self, nfilts): + self.nfilts = nfilts + self.set_rrc_taps(firdes.root_raised_cosine(self.nfilts, self.nfilts, 1.1/float(self.sps), 0.2, 11*self.sps*self.nfilts)) + + def get_rrc_taps(self): + return self.rrc_taps + + def set_rrc_taps(self, rrc_taps): + self.rrc_taps = rrc_taps + self.digital_pfb_clock_sync_xxx_0.update_taps(self.rrc_taps) + + def get_outputsps(self): + return self.outputsps + + def set_outputsps(self, outputsps): + self.outputsps = outputsps + + def get_mixf(self): + return self.mixf + + def set_mixf(self, mixf): + self.mixf = mixf + self.analog_sig_source_x_0_0_0.set_frequency(self.mixf) + + + + +def argument_parser(): + description = 'requires GNU Radio 3.8xxx' + parser = ArgumentParser(description=description) + parser.add_argument( + "-r", "--resamp", dest="resamp", type=intx, default=6, + help="Set resamp [default=%(default)r]") + parser.add_argument( + "-s", "--samp-rate", dest="samp_rate", type=intx, default=48000, + help="Set samp_rate [default=%(default)r]") + return parser + + +def main(top_block_cls=rx_8psk, options=None): + if options is None: + options = argument_parser().parse_args() + tb = top_block_cls(resamp=options.resamp, samp_rate=options.samp_rate) + + def sig_handler(sig=None, frame=None): + tb.stop() + tb.wait() + + sys.exit(0) + + signal.signal(signal.SIGINT, sig_handler) + signal.signal(signal.SIGTERM, sig_handler) + + tb.start() + + tb.wait() + + +if __name__ == '__main__': + main() diff --git a/modem/scrambler.c b/modem/scrambler.c new file mode 100644 index 0000000..8fd1e33 --- /dev/null +++ b/modem/scrambler.c @@ -0,0 +1,91 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
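// A minimal sketch (not part of this file) of the XOR idea behind the
// TX_Scramble() and RX_Scramble() routines below, assuming both simply XOR
// the data with the fixed pseudo-random table scr[]: XOR with the same table
// is self-inverse, so the receiver recovers the original bytes, while the
// transmitted bit stream stays pseudo-random, which helps symbol clock and
// carrier recovery. xor_with_table() is a hypothetical helper name.
#include <stdint.h>

static void xor_with_table(uint8_t *data, int len, const uint8_t *table)
{
    for (int i = 0; i < len; i++)
        data[i] ^= table[i];        // applying this twice restores the input
}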
+* +*/ + +#include "qo100modem.h" + +uint8_t scr[400] = { +130 , 239 , 223 , 19 , 146 , 254 , 12 , 86 , 106 , 68 , +77 , 213 , 243 , 216 , 102 , 227 , 108 , 113 , 229 , 89 , +26 , 64 , 138 , 216 , 225 , 121 , 194 , 137 , 152 , 64 , +51 , 175 , 68 , 200 , 37 , 104 , 247 , 68 , 193 , 50 , +19 , 14 , 196 , 81 , 4 , 236 , 191 , 249 , 83 , 25 , +161 , 171 , 167 , 29 , 33 , 139 , 7 , 152 , 230 , 144 , +125 , 206 , 34 , 236 , 112 , 78 , 219 , 34 , 181 , 161 , +7 , 45 , 198 , 235 , 62 , 115 , 194 , 100 , 209 , 95 , +186 , 161 , 53 , 10 , 110 , 246 , 122 , 246 , 207 , 194 , +178 , 63 , 232 , 93 , 158 , 234 , 231 , 73 , 214 , 64, +130 , 239 , 223 , 19 , 146 , 254 , 12 , 86 , 106 , 68 , +77 , 213 , 243 , 216 , 102 , 227 , 108 , 113 , 229 , 89 , +26 , 64 , 138 , 216 , 225 , 121 , 194 , 137 , 152 , 64 , +51 , 175 , 68 , 200 , 37 , 104 , 247 , 68 , 193 , 50 , +19 , 14 , 196 , 81 , 4 , 236 , 191 , 249 , 83 , 25 , +161 , 171 , 167 , 29 , 33 , 139 , 7 , 152 , 230 , 144 , +125 , 206 , 34 , 236 , 112 , 78 , 219 , 34 , 181 , 161 , +7 , 45 , 198 , 235 , 62 , 115 , 194 , 100 , 209 , 95 , +186 , 161 , 53 , 10 , 110 , 246 , 122 , 246 , 207 , 194 , +178 , 63 , 232 , 93 , 158 , 234 , 231 , 73 , 214 , 64, +130 , 239 , 223 , 19 , 146 , 254 , 12 , 86 , 106 , 68 , +77 , 213 , 243 , 216 , 102 , 227 , 108 , 113 , 229 , 89 , +26 , 64 , 138 , 216 , 225 , 121 , 194 , 137 , 152 , 64 , +51 , 175 , 68 , 200 , 37 , 104 , 247 , 68 , 193 , 50 , +19 , 14 , 196 , 81 , 4 , 236 , 191 , 249 , 83 , 25 , +161 , 171 , 167 , 29 , 33 , 139 , 7 , 152 , 230 , 144 , +125 , 206 , 34 , 236 , 112 , 78 , 219 , 34 , 181 , 161 , +7 , 45 , 198 , 235 , 62 , 115 , 194 , 100 , 209 , 95 , +186 , 161 , 53 , 10 , 110 , 246 , 122 , 246 , 207 , 194 , +178 , 63 , 232 , 93 , 158 , 234 , 231 , 73 , 214 , 64, +130 , 239 , 223 , 19 , 146 , 254 , 12 , 86 , 106 , 68 , +77 , 213 , 243 , 216 , 102 , 227 , 108 , 113 , 229 , 89 , +26 , 64 , 138 , 216 , 225 , 121 , 194 , 137 , 152 , 64 , +51 , 175 , 68 , 200 , 37 , 104 , 247 , 68 , 193 , 50 , +19 , 14 , 196 , 81 , 4 , 236 , 191 , 249 , 83 , 25 , +161 , 171 , 167 , 29 , 33 , 139 , 7 , 152 , 230 , 144 , +125 , 206 , 34 , 236 , 112 , 78 , 219 , 34 , 181 , 161 , +7 , 45 , 198 , 235 , 62 , 115 , 194 , 100 , 209 , 95 , +186 , 161 , 53 , 10 , 110 , 246 , 122 , 246 , 207 , 194 , +178 , 63 , 232 , 93 , 158 , 234 , 231 , 73 , 214 , 64 +}; + +uint8_t rx_scrbuf[400]; + +void TX_Scramble(uint8_t *data, int len) +{ + if (len > 400) return; + + for(int i=0; i 400) return data; + + memcpy(rx_scrbuf,data,len); + + for(int i=0; i= MAXUDPTHREADS) + { + printf("max number of UDP threads\n"); + exit(0); + } + + rxcfg[rxcfg_idx].sock = sock; + rxcfg[rxcfg_idx].port = port; + rxcfg[rxcfg_idx].rxfunc = rxfunc; + rxcfg[rxcfg_idx].keeprunning = keeprunning; + + // bind port + struct sockaddr_in sin; + + *sock = socket(PF_INET, SOCK_DGRAM, 0); + if (*sock == -1){ + printf("Failed to create Socket\n"); + exit(0); + } + + int enable = 1; + setsockopt(*sock, SOL_SOCKET, SO_REUSEADDR, &enable, sizeof(int)); + + memset(&sin, 0, sizeof(struct sockaddr_in)); + sin.sin_family = AF_INET; + sin.sin_port = htons(port); + sin.sin_addr.s_addr = INADDR_ANY; + + if (bind(*sock, (struct sockaddr *)&sin, sizeof(struct sockaddr_in)) != 0) + { + printf("Failed to bind socket, port:%d\n",port); + close(*sock); + exit(0); + } + + // port sucessfully bound + // create the receive thread + pthread_t rxthread; + pthread_create(&rxthread, NULL, threadfunction, &(rxcfg[rxcfg_idx])); + + rxcfg_idx++; +} + +void *threadfunction(void *param) +{ + RXCFG rxcfg; + 
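    // param points into the shared rxcfg[] table that UdpRxInit() fills;
    // working on a private copy keeps this receive loop independent of the
    // table once the thread has started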
memcpy((uint8_t *)(&rxcfg), (uint8_t *)param, sizeof(RXCFG)); + + socklen_t fromlen; + int recvlen; + char rxbuf[256]; + struct sockaddr_in fromSock; + + fromlen = sizeof(struct sockaddr_in); + while(*rxcfg.keeprunning) + { + recvlen = recvfrom(*rxcfg.sock, rxbuf, 256, 0, (struct sockaddr *)&fromSock, &fromlen); + if (recvlen > 0) + { + // data received, send it to callback function + (*rxcfg.rxfunc)((uint8_t *)rxbuf,recvlen, &fromSock); + } + } + + return NULL; +} + +// send UDP message +void sendUDP(char *destIP, int destPort, uint8_t *pdata, int len) +{ + int sockfd; + struct sockaddr_in servaddr; + + // Creating socket file descriptor + if ( (sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0 ) { + printf("sendUDP: socket creation failed\n"); + exit(0); + } + + memset(&servaddr, 0, sizeof(servaddr)); + + // Filling server information + servaddr.sin_family = AF_INET; + servaddr.sin_port = htons(destPort); + //printf("Send to <%s><%d> Len:%d\n",destIP,destPort,len); + servaddr.sin_addr.s_addr=inet_addr(destIP); + + sendto(sockfd, (char *)pdata, len, 0, (const struct sockaddr *) &servaddr, sizeof(servaddr)); + close(sockfd); +} + diff --git a/modem/udp.h b/modem/udp.h new file mode 100644 index 0000000..33163fe --- /dev/null +++ b/modem/udp.h @@ -0,0 +1,9 @@ +void UdpRxInit(int *sock, int port, void (*rxfunc)(uint8_t *, int, struct sockaddr_in*), int *keeprunning); +void sendUDP(char *destIP, int destPort, uint8_t *pdata, int len); + +typedef struct { + int *sock; + int port; + void (*rxfunc)(uint8_t *, int, struct sockaddr_in*); + int *keeprunning; +} RXCFG; diff --git a/oscardata/.vs/VSWorkspaceState.json b/oscardata/.vs/VSWorkspaceState.json new file mode 100755 index 0000000..e092b2e --- /dev/null +++ b/oscardata/.vs/VSWorkspaceState.json @@ -0,0 +1,11 @@ +{ + "ExpandedNodes": [ + "", + "\\oscardata", + "\\oscardata\\bin", + "\\oscardata\\obj", + "\\packages" + ], + "SelectedNode": "\\packages", + "PreviewInSolutionExplorer": false +} \ No newline at end of file diff --git a/oscardata/.vs/oscardata/v15/.suo b/oscardata/.vs/oscardata/v15/.suo new file mode 100755 index 0000000..c6adffa Binary files /dev/null and b/oscardata/.vs/oscardata/v15/.suo differ diff --git a/oscardata/.vs/oscardata/v15/Server/sqlite3/db.lock b/oscardata/.vs/oscardata/v15/Server/sqlite3/db.lock new file mode 100755 index 0000000..e69de29 diff --git a/oscardata/.vs/oscardata/v15/Server/sqlite3/storage.ide b/oscardata/.vs/oscardata/v15/Server/sqlite3/storage.ide new file mode 100755 index 0000000..0bd9e81 Binary files /dev/null and b/oscardata/.vs/oscardata/v15/Server/sqlite3/storage.ide differ diff --git a/oscardata/.vs/oscardata/v16/.suo b/oscardata/.vs/oscardata/v16/.suo new file mode 100755 index 0000000..17913e6 Binary files /dev/null and b/oscardata/.vs/oscardata/v16/.suo differ diff --git a/oscardata/.vs/oscardata/v16/Server/sqlite3/db.lock b/oscardata/.vs/oscardata/v16/Server/sqlite3/db.lock new file mode 100755 index 0000000..e69de29 diff --git a/oscardata/.vs/oscardata/v16/Server/sqlite3/storage.ide b/oscardata/.vs/oscardata/v16/Server/sqlite3/storage.ide new file mode 100755 index 0000000..03114fd Binary files /dev/null and b/oscardata/.vs/oscardata/v16/Server/sqlite3/storage.ide differ diff --git a/oscardata/.vs/slnx.sqlite b/oscardata/.vs/slnx.sqlite new file mode 100755 index 0000000..30bf70f Binary files /dev/null and b/oscardata/.vs/slnx.sqlite differ diff --git a/oscardata/oscardata.sln b/oscardata/oscardata.sln new file mode 100755 index 0000000..66105cd --- /dev/null +++ b/oscardata/oscardata.sln @@ 
-0,0 +1,25 @@ + +Microsoft Visual Studio Solution File, Format Version 12.00 +# Visual Studio 15 +VisualStudioVersion = 15.0.27130.2010 +MinimumVisualStudioVersion = 10.0.40219.1 +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "oscardata", "oscardata\oscardata.csproj", "{989BF5C6-36F6-4158-9FB2-42E86D2020DB}" +EndProject +Global + GlobalSection(SolutionConfigurationPlatforms) = preSolution + Debug|Any CPU = Debug|Any CPU + Release|Any CPU = Release|Any CPU + EndGlobalSection + GlobalSection(ProjectConfigurationPlatforms) = postSolution + {989BF5C6-36F6-4158-9FB2-42E86D2020DB}.Debug|Any CPU.ActiveCfg = Debug|Any CPU + {989BF5C6-36F6-4158-9FB2-42E86D2020DB}.Debug|Any CPU.Build.0 = Debug|Any CPU + {989BF5C6-36F6-4158-9FB2-42E86D2020DB}.Release|Any CPU.ActiveCfg = Release|Any CPU + {989BF5C6-36F6-4158-9FB2-42E86D2020DB}.Release|Any CPU.Build.0 = Release|Any CPU + EndGlobalSection + GlobalSection(SolutionProperties) = preSolution + HideSolutionNode = FALSE + EndGlobalSection + GlobalSection(ExtensibilityGlobals) = postSolution + SolutionGuid = {BCA5060C-33D6-4062-A3AF-6F304E7BFD89} + EndGlobalSection +EndGlobal diff --git a/oscardata/oscardata/App.config b/oscardata/oscardata/App.config new file mode 100755 index 0000000..e743be0 --- /dev/null +++ b/oscardata/oscardata/App.config @@ -0,0 +1,6 @@ + + + + + + diff --git a/oscardata/oscardata/ArraySend.cs b/oscardata/oscardata/ArraySend.cs new file mode 100755 index 0000000..78cd33a --- /dev/null +++ b/oscardata/oscardata/ArraySend.cs @@ -0,0 +1,204 @@ +using System; +using System.Runtime.InteropServices; +using System.Threading; + +// Input: Byte Array +// Action: sends this byte array to the modem + +namespace oscardata +{ + public static class ArraySend + { + static Timer TTimer; + static Byte[] txdata; + static int txlen; + public static int txpos; + static Byte txtype; + static bool sending = false; + public static Byte filestat = statics.noTX; + static private readonly object busyLock = new object(); + static int timeout_period_ms = 10; + + // start a timer which is used to send a file from txdata + public static void ArraySendInit() + { + TTimer = new Timer(new TimerCallback(TimerTick), 0, 0, timeout_period_ms); + } + + static void setSending(bool v) + { + lock(busyLock) + { + sending = v; + if (v == false) + filestat = statics.LastFrame; + } + } + public static bool getSending() + { + bool v; + + lock (busyLock) + { + v = sending; + } + return v; + } + + /* + * start sending a file + * data ... contents of the file in a Byte array + * type ... type of the file (see statics) + * filename ... description of the file or its name (payload length max) + */ + public static bool Send(Byte[] data, Byte type, String filename, String RealFileName) + { + // check if already sending + if (getSending()) return false; + + txtype = type; + txpos = 0; + filestat = statics.FirstFrame; + // add a file header and copy to txdata for transmission + AddHeader(data,filename, RealFileName); + + // marker, we are sending + txlen = txdata.Length; + setSending(true); + + return true; + } + + public static void stopSending() + { + setSending(false); + } + + static void AddHeader(Byte[] data, String filename, String realname) + { + long filesize = data.Length;// statics.GetFileSize(filename); + + Byte[] fnarr = statics.StringToByteArray(realname); + Crc c = new Crc(); + UInt16 fncrc = c.crc16_messagecalc(fnarr, fnarr.Length); + + // create the file header + // 50 bytes ... Filename (or first 50 chars of the filename) + // 2 bytes .... 
CRC16 od the filename, this is used as a file ID + // 3 bytes .... size of file + Byte[] header = new Byte[55]; + + int len = fnarr.Length; + if (len > 50) len = 50; + Array.Copy(fnarr, header, len); + header[50] = (Byte)((fncrc >> 8)&0xff); + header[51] = (Byte)(fncrc&0xff); + + header[52] = (Byte)((filesize >> 16) & 0xff); + header[53] = (Byte)((filesize >> 8) & 0xff); + header[54] = (Byte)(filesize & 0xff); + + txdata = new Byte[data.Length + header.Length]; + Array.Copy(header, txdata, header.Length); + Array.Copy(data, 0, txdata, header.Length, data.Length); + } + + public static String rxFilename; + public static int FileID; + public static int FileSize; + public static Byte[] GetAndRemoveHeader(Byte[] data) + { + try + { + Byte[] fnarr = new byte[50]; + Array.Copy(data, fnarr, 50); + rxFilename = statics.ByteArrayToString(fnarr); + + FileID = data[50]; + FileID <<= 8; + FileID += data[51]; + + FileSize = data[52]; + FileSize <<= 8; + FileSize += data[53]; + FileSize <<= 8; + FileSize += data[54]; + + Byte[] f = new byte[data.Length - 55]; + Array.Copy(data, 55, f, 0, data.Length - 55); + return f; + } + catch { } + return null; + } + + // runs every 10 ms + static void TimerTick(object stateInfo) + { + // check if we need to send something + if (getSending() == false) return; // nothing to send + + // check the TX buffer, do not feed more data into + // the buffer if it has already more than 10 entries + if (Udp.GetBufferCount() > 3) return; + + Byte[] txarr = new byte[statics.PayloadLen]; + + // check if txdata is smaller or equal one payload + if (filestat == statics.FirstFrame) + { + // send the first frame + if (txlen <= statics.PayloadLen) + { + // we just need to send one frame + txudp(txdata, txtype, statics.LastFrame); + setSending(false); // transmission complete + } + else + { + // additional frame follow + // from txdata send one chunk of length statics.PayloadLen + Array.Copy(txdata, 0, txarr, 0, statics.PayloadLen); + txudp(txarr, txtype, statics.FirstFrame); + txpos = statics.PayloadLen; + filestat = statics.NextFrame; + } + return; + } + + if (filestat == statics.NextFrame) + { + // check if this is the last frame + int restlen = txlen - txpos; + if(restlen <= statics.PayloadLen) + { + // send as the last frame + Array.Copy(txdata, txpos, txarr, 0, restlen); // unused byte will be 0 + txudp(txarr, txtype, statics.LastFrame); + txudp(txarr, txtype, statics.LastFrame); + setSending(false); // transmission complete + } + else + { + // additional frame follows + // from txdata send one chunk of length statics.PayloadLen + Array.Copy(txdata, txpos, txarr, 0, statics.PayloadLen); + txudp(txarr, txtype, statics.NextFrame); + txpos += statics.PayloadLen; + } + return; + } + } + + static void txudp(Byte[] txdata, Byte txtype, Byte filestat) + { + // add the tytype and filestatus at the beginning + Byte[] darr = new byte[statics.PayloadLen + 2]; + darr[0] = txtype; + darr[1] = filestat; + Array.Copy(txdata, 0, darr, 2, statics.PayloadLen); + Udp.UdpSend(darr); + // Console.WriteLine("TX filestat: " + filestat+ " data:" + darr[2].ToString("X2") + " " + darr[3].ToString("X2")); + } + } +} diff --git a/oscardata/oscardata/Form1.Designer.cs b/oscardata/oscardata/Form1.Designer.cs new file mode 100755 index 0000000..dc2eafb --- /dev/null +++ b/oscardata/oscardata/Form1.Designer.cs @@ -0,0 +1,700 @@ +namespace oscardata +{ + partial class Form1 + { + /// + /// Erforderliche Designervariable. 
+ /// + private System.ComponentModel.IContainer components = null; + + /// + /// Verwendete Ressourcen bereinigen. + /// + /// True, wenn verwaltete Ressourcen gelöscht werden sollen; andernfalls False. + protected override void Dispose(bool disposing) + { + if (disposing && (components != null)) + { + components.Dispose(); + } + base.Dispose(disposing); + } + + #region Vom Windows Form-Designer generierter Code + + /// + /// Erforderliche Methode für die Designerunterstützung. + /// Der Inhalt der Methode darf nicht mit dem Code-Editor geändert werden. + /// + private void InitializeComponent() + { + this.components = new System.ComponentModel.Container(); + System.ComponentModel.ComponentResourceManager resources = new System.ComponentModel.ComponentResourceManager(typeof(Form1)); + this.timer_udpTX = new System.Windows.Forms.Timer(this.components); + this.timer_udprx = new System.Windows.Forms.Timer(this.components); + this.statusStrip1 = new System.Windows.Forms.StatusStrip(); + this.toolStripStatusLabel = new System.Windows.Forms.ToolStripStatusLabel(); + this.ts_ip = new System.Windows.Forms.ToolStripStatusLabel(); + this.RXstatus = new System.Windows.Forms.ToolStripStatusLabel(); + this.panel_constel = new System.Windows.Forms.Panel(); + this.timer_qpsk = new System.Windows.Forms.Timer(this.components); + this.panel_txspectrum = new System.Windows.Forms.Panel(); + this.tabPage1 = new System.Windows.Forms.TabPage(); + this.button_stopBERtest = new System.Windows.Forms.Button(); + this.button_startBERtest = new System.Windows.Forms.Button(); + this.rtb = new System.Windows.Forms.RichTextBox(); + this.tabPage2 = new System.Windows.Forms.TabPage(); + this.groupBox1 = new System.Windows.Forms.Panel(); + this.label_nextimage = new System.Windows.Forms.Label(); + this.cb_loop = new System.Windows.Forms.CheckBox(); + this.bt_rximages = new System.Windows.Forms.Button(); + this.button_loadimage = new System.Windows.Forms.Button(); + this.comboBox_quality = new System.Windows.Forms.ComboBox(); + this.label2 = new System.Windows.Forms.Label(); + this.checkBox_big = new System.Windows.Forms.CheckBox(); + this.button_cancelimg = new System.Windows.Forms.Button(); + this.button_sendimage = new System.Windows.Forms.Button(); + this.label_rximage = new System.Windows.Forms.Label(); + this.label_tximage = new System.Windows.Forms.Label(); + this.pictureBox_rximage = new System.Windows.Forms.PictureBox(); + this.pictureBox_tximage = new System.Windows.Forms.PictureBox(); + this.tabControl1 = new System.Windows.Forms.TabControl(); + this.tabPage3 = new System.Windows.Forms.TabPage(); + this.button2 = new System.Windows.Forms.Button(); + this.bt_openrxfile = new System.Windows.Forms.Button(); + this.label_rxfile = new System.Windows.Forms.Label(); + this.label_txfile = new System.Windows.Forms.Label(); + this.rtb_RXfile = new System.Windows.Forms.RichTextBox(); + this.rtb_TXfile = new System.Windows.Forms.RichTextBox(); + this.bt_file_send = new System.Windows.Forms.Button(); + this.bt_sendBinaryFile = new System.Windows.Forms.Button(); + this.bt_file_html = new System.Windows.Forms.Button(); + this.bt_file_ascii = new System.Windows.Forms.Button(); + this.tabPage5 = new System.Windows.Forms.TabPage(); + this.textBox1 = new System.Windows.Forms.TextBox(); + this.bt_shutdown = new System.Windows.Forms.Button(); + this.cb_savegoodfiles = new System.Windows.Forms.CheckBox(); + this.cb_stampcall = new System.Windows.Forms.CheckBox(); + this.tb_callsign = new System.Windows.Forms.TextBox(); + this.label1 = 
new System.Windows.Forms.Label(); + this.cb_speed = new System.Windows.Forms.ComboBox(); + this.label_speed = new System.Windows.Forms.Label(); + this.timer_searchmodem = new System.Windows.Forms.Timer(this.components); + this.statusStrip1.SuspendLayout(); + this.tabPage1.SuspendLayout(); + this.tabPage2.SuspendLayout(); + this.groupBox1.SuspendLayout(); + ((System.ComponentModel.ISupportInitialize)(this.pictureBox_rximage)).BeginInit(); + ((System.ComponentModel.ISupportInitialize)(this.pictureBox_tximage)).BeginInit(); + this.tabControl1.SuspendLayout(); + this.tabPage3.SuspendLayout(); + this.tabPage5.SuspendLayout(); + this.SuspendLayout(); + // + // timer_udpTX + // + this.timer_udpTX.Tick += new System.EventHandler(this.timer1_Tick); + // + // timer_udprx + // + this.timer_udprx.Tick += new System.EventHandler(this.timer_udprx_Tick); + // + // statusStrip1 + // + this.statusStrip1.ImageScalingSize = new System.Drawing.Size(20, 20); + this.statusStrip1.Items.AddRange(new System.Windows.Forms.ToolStripItem[] { + this.toolStripStatusLabel, + this.ts_ip, + this.RXstatus}); + this.statusStrip1.Location = new System.Drawing.Point(0, 669); + this.statusStrip1.Name = "statusStrip1"; + this.statusStrip1.Size = new System.Drawing.Size(1304, 22); + this.statusStrip1.TabIndex = 4; + this.statusStrip1.Text = "statusStrip1"; + // + // toolStripStatusLabel + // + this.toolStripStatusLabel.Name = "toolStripStatusLabel"; + this.toolStripStatusLabel.Size = new System.Drawing.Size(39, 17); + this.toolStripStatusLabel.Text = "Status"; + // + // ts_ip + // + this.ts_ip.Name = "ts_ip"; + this.ts_ip.Size = new System.Drawing.Size(12, 17); + this.ts_ip.Text = "?"; + // + // RXstatus + // + this.RXstatus.Name = "RXstatus"; + this.RXstatus.Size = new System.Drawing.Size(58, 17); + this.RXstatus.Text = "RX-Status"; + // + // panel_constel + // + this.panel_constel.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(255)))), ((int)(((byte)(255)))), ((int)(((byte)(220))))); + this.panel_constel.Location = new System.Drawing.Point(11, 590); + this.panel_constel.Name = "panel_constel"; + this.panel_constel.Size = new System.Drawing.Size(75, 75); + this.panel_constel.TabIndex = 5; + this.panel_constel.Paint += new System.Windows.Forms.PaintEventHandler(this.panel_constel_Paint); + // + // timer_qpsk + // + this.timer_qpsk.Enabled = true; + this.timer_qpsk.Interval = 500; + this.timer_qpsk.Tick += new System.EventHandler(this.timer_qpsk_Tick); + // + // panel_txspectrum + // + this.panel_txspectrum.BackColor = System.Drawing.SystemColors.ControlLight; + this.panel_txspectrum.Location = new System.Drawing.Point(92, 590); + this.panel_txspectrum.Name = "panel_txspectrum"; + this.panel_txspectrum.Size = new System.Drawing.Size(441, 76); + this.panel_txspectrum.TabIndex = 6; + this.panel_txspectrum.Paint += new System.Windows.Forms.PaintEventHandler(this.panel_txspectrum_Paint); + // + // tabPage1 + // + this.tabPage1.Controls.Add(this.button_stopBERtest); + this.tabPage1.Controls.Add(this.button_startBERtest); + this.tabPage1.Controls.Add(this.rtb); + this.tabPage1.Location = new System.Drawing.Point(4, 22); + this.tabPage1.Name = "tabPage1"; + this.tabPage1.Padding = new System.Windows.Forms.Padding(3); + this.tabPage1.Size = new System.Drawing.Size(1291, 553); + this.tabPage1.TabIndex = 0; + this.tabPage1.Text = "BER Test"; + this.tabPage1.UseVisualStyleBackColor = true; + // + // button_stopBERtest + // + this.button_stopBERtest.Font = new System.Drawing.Font("Microsoft Sans Serif", 10F, 
System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.button_stopBERtest.Location = new System.Drawing.Point(113, 13); + this.button_stopBERtest.Name = "button_stopBERtest"; + this.button_stopBERtest.Size = new System.Drawing.Size(101, 32); + this.button_stopBERtest.TabIndex = 4; + this.button_stopBERtest.Text = "STOP"; + this.button_stopBERtest.UseVisualStyleBackColor = true; + this.button_stopBERtest.Click += new System.EventHandler(this.button_stopBERtest_Click); + // + // button_startBERtest + // + this.button_startBERtest.Font = new System.Drawing.Font("Microsoft Sans Serif", 10F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.button_startBERtest.Location = new System.Drawing.Point(6, 13); + this.button_startBERtest.Name = "button_startBERtest"; + this.button_startBERtest.Size = new System.Drawing.Size(101, 32); + this.button_startBERtest.TabIndex = 3; + this.button_startBERtest.Text = "START"; + this.button_startBERtest.UseVisualStyleBackColor = true; + this.button_startBERtest.Click += new System.EventHandler(this.button_startBERtest_Click); + // + // rtb + // + this.rtb.Font = new System.Drawing.Font("Courier New", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.rtb.Location = new System.Drawing.Point(6, 51); + this.rtb.Name = "rtb"; + this.rtb.Size = new System.Drawing.Size(1270, 494); + this.rtb.TabIndex = 0; + this.rtb.Text = ""; + // + // tabPage2 + // + this.tabPage2.Controls.Add(this.groupBox1); + this.tabPage2.Controls.Add(this.label_rximage); + this.tabPage2.Controls.Add(this.label_tximage); + this.tabPage2.Controls.Add(this.pictureBox_rximage); + this.tabPage2.Controls.Add(this.pictureBox_tximage); + this.tabPage2.Location = new System.Drawing.Point(4, 22); + this.tabPage2.Name = "tabPage2"; + this.tabPage2.Padding = new System.Windows.Forms.Padding(3); + this.tabPage2.Size = new System.Drawing.Size(1291, 553); + this.tabPage2.TabIndex = 1; + this.tabPage2.Text = "Image"; + this.tabPage2.UseVisualStyleBackColor = true; + // + // groupBox1 + // + this.groupBox1.Controls.Add(this.label_nextimage); + this.groupBox1.Controls.Add(this.cb_loop); + this.groupBox1.Controls.Add(this.bt_rximages); + this.groupBox1.Controls.Add(this.button_loadimage); + this.groupBox1.Controls.Add(this.comboBox_quality); + this.groupBox1.Controls.Add(this.label2); + this.groupBox1.Controls.Add(this.checkBox_big); + this.groupBox1.Controls.Add(this.button_cancelimg); + this.groupBox1.Controls.Add(this.button_sendimage); + this.groupBox1.Location = new System.Drawing.Point(3, 508); + this.groupBox1.Name = "groupBox1"; + this.groupBox1.Size = new System.Drawing.Size(1277, 42); + this.groupBox1.TabIndex = 12; + // + // label_nextimage + // + this.label_nextimage.AutoSize = true; + this.label_nextimage.Location = new System.Drawing.Point(618, 19); + this.label_nextimage.Name = "label_nextimage"; + this.label_nextimage.Size = new System.Drawing.Size(81, 13); + this.label_nextimage.TabIndex = 12; + this.label_nextimage.Text = "next image in ..."; + // + // cb_loop + // + this.cb_loop.AutoSize = true; + this.cb_loop.Location = new System.Drawing.Point(621, 5); + this.cb_loop.Name = "cb_loop"; + this.cb_loop.Size = new System.Drawing.Size(167, 17); + this.cb_loop.TabIndex = 11; + this.cb_loop.Text = "loop (send all images in folder)"; + this.cb_loop.UseVisualStyleBackColor = true; + // + // bt_rximages + // + this.bt_rximages.Location = new System.Drawing.Point(534, 5); + 
this.bt_rximages.Name = "bt_rximages"; + this.bt_rximages.Size = new System.Drawing.Size(75, 23); + this.bt_rximages.TabIndex = 10; + this.bt_rximages.Text = "RX Images"; + this.bt_rximages.UseVisualStyleBackColor = true; + this.bt_rximages.Click += new System.EventHandler(this.bt_rximages_Click); + // + // button_loadimage + // + this.button_loadimage.Location = new System.Drawing.Point(265, 5); + this.button_loadimage.Name = "button_loadimage"; + this.button_loadimage.Size = new System.Drawing.Size(75, 23); + this.button_loadimage.TabIndex = 0; + this.button_loadimage.Text = "Load Image"; + this.button_loadimage.UseVisualStyleBackColor = true; + this.button_loadimage.Click += new System.EventHandler(this.button_loadimage_Click); + // + // comboBox_quality + // + this.comboBox_quality.FormattingEnabled = true; + this.comboBox_quality.Items.AddRange(new object[] { + "low, 30s", + "medium, 1min", + "high, 2min", + "very high, 4min"}); + this.comboBox_quality.Location = new System.Drawing.Point(57, 7); + this.comboBox_quality.Name = "comboBox_quality"; + this.comboBox_quality.Size = new System.Drawing.Size(109, 21); + this.comboBox_quality.TabIndex = 6; + this.comboBox_quality.Text = "medium, 1min"; + // + // label2 + // + this.label2.AutoSize = true; + this.label2.Location = new System.Drawing.Point(8, 10); + this.label2.Name = "label2"; + this.label2.Size = new System.Drawing.Size(42, 13); + this.label2.TabIndex = 7; + this.label2.Text = "Quality:"; + // + // checkBox_big + // + this.checkBox_big.AutoSize = true; + this.checkBox_big.Checked = true; + this.checkBox_big.CheckState = System.Windows.Forms.CheckState.Checked; + this.checkBox_big.Location = new System.Drawing.Point(187, 9); + this.checkBox_big.Name = "checkBox_big"; + this.checkBox_big.Size = new System.Drawing.Size(75, 17); + this.checkBox_big.TabIndex = 8; + this.checkBox_big.Text = "big picture"; + this.checkBox_big.UseVisualStyleBackColor = true; + this.checkBox_big.CheckedChanged += new System.EventHandler(this.checkBox_small_CheckedChanged); + // + // button_cancelimg + // + this.button_cancelimg.Location = new System.Drawing.Point(443, 5); + this.button_cancelimg.Name = "button_cancelimg"; + this.button_cancelimg.Size = new System.Drawing.Size(75, 23); + this.button_cancelimg.TabIndex = 9; + this.button_cancelimg.Text = "Cancel"; + this.button_cancelimg.UseVisualStyleBackColor = true; + this.button_cancelimg.Click += new System.EventHandler(this.button_cancelimg_Click); + // + // button_sendimage + // + this.button_sendimage.Location = new System.Drawing.Point(346, 5); + this.button_sendimage.Name = "button_sendimage"; + this.button_sendimage.Size = new System.Drawing.Size(75, 23); + this.button_sendimage.TabIndex = 1; + this.button_sendimage.Text = "Send Image"; + this.button_sendimage.UseVisualStyleBackColor = true; + this.button_sendimage.Click += new System.EventHandler(this.button_sendimage_Click); + // + // label_rximage + // + this.label_rximage.AutoSize = true; + this.label_rximage.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.label_rximage.Location = new System.Drawing.Point(648, 7); + this.label_rximage.Name = "label_rximage"; + this.label_rximage.Size = new System.Drawing.Size(61, 13); + this.label_rximage.TabIndex = 5; + this.label_rximage.Text = "RX image"; + // + // label_tximage + // + this.label_tximage.AutoSize = true; + this.label_tximage.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, 
System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.label_tximage.Location = new System.Drawing.Point(6, 7); + this.label_tximage.Name = "label_tximage"; + this.label_tximage.Size = new System.Drawing.Size(60, 13); + this.label_tximage.TabIndex = 4; + this.label_tximage.Text = "TX image"; + // + // pictureBox_rximage + // + this.pictureBox_rximage.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(240)))), ((int)(((byte)(250)))), ((int)(((byte)(240))))); + this.pictureBox_rximage.BackgroundImageLayout = System.Windows.Forms.ImageLayout.None; + this.pictureBox_rximage.Location = new System.Drawing.Point(642, 27); + this.pictureBox_rximage.Name = "pictureBox_rximage"; + this.pictureBox_rximage.Size = new System.Drawing.Size(640, 480); + this.pictureBox_rximage.SizeMode = System.Windows.Forms.PictureBoxSizeMode.AutoSize; + this.pictureBox_rximage.TabIndex = 3; + this.pictureBox_rximage.TabStop = false; + // + // pictureBox_tximage + // + this.pictureBox_tximage.BackColor = System.Drawing.Color.FromArgb(((int)(((byte)(250)))), ((int)(((byte)(250)))), ((int)(((byte)(240))))); + this.pictureBox_tximage.BackgroundImageLayout = System.Windows.Forms.ImageLayout.None; + this.pictureBox_tximage.Location = new System.Drawing.Point(1, 27); + this.pictureBox_tximage.Name = "pictureBox_tximage"; + this.pictureBox_tximage.Size = new System.Drawing.Size(640, 480); + this.pictureBox_tximage.TabIndex = 2; + this.pictureBox_tximage.TabStop = false; + // + // tabControl1 + // + this.tabControl1.Controls.Add(this.tabPage2); + this.tabControl1.Controls.Add(this.tabPage3); + this.tabControl1.Controls.Add(this.tabPage1); + this.tabControl1.Controls.Add(this.tabPage5); + this.tabControl1.Location = new System.Drawing.Point(5, 3); + this.tabControl1.Name = "tabControl1"; + this.tabControl1.SelectedIndex = 0; + this.tabControl1.Size = new System.Drawing.Size(1299, 579); + this.tabControl1.TabIndex = 3; + // + // tabPage3 + // + this.tabPage3.Controls.Add(this.button2); + this.tabPage3.Controls.Add(this.bt_openrxfile); + this.tabPage3.Controls.Add(this.label_rxfile); + this.tabPage3.Controls.Add(this.label_txfile); + this.tabPage3.Controls.Add(this.rtb_RXfile); + this.tabPage3.Controls.Add(this.rtb_TXfile); + this.tabPage3.Controls.Add(this.bt_file_send); + this.tabPage3.Controls.Add(this.bt_sendBinaryFile); + this.tabPage3.Controls.Add(this.bt_file_html); + this.tabPage3.Controls.Add(this.bt_file_ascii); + this.tabPage3.Location = new System.Drawing.Point(4, 22); + this.tabPage3.Name = "tabPage3"; + this.tabPage3.Size = new System.Drawing.Size(1291, 553); + this.tabPage3.TabIndex = 2; + this.tabPage3.Text = "File"; + this.tabPage3.UseVisualStyleBackColor = true; + // + // button2 + // + this.button2.Location = new System.Drawing.Point(17, 218); + this.button2.Name = "button2"; + this.button2.Size = new System.Drawing.Size(137, 23); + this.button2.TabIndex = 12; + this.button2.Text = "Cancel"; + this.button2.UseVisualStyleBackColor = true; + this.button2.Click += new System.EventHandler(this.button_cancelimg_Click); + // + // bt_openrxfile + // + this.bt_openrxfile.Location = new System.Drawing.Point(17, 306); + this.bt_openrxfile.Name = "bt_openrxfile"; + this.bt_openrxfile.Size = new System.Drawing.Size(137, 33); + this.bt_openrxfile.TabIndex = 11; + this.bt_openrxfile.Text = "Open RX file folder"; + this.bt_openrxfile.UseVisualStyleBackColor = true; + this.bt_openrxfile.Click += new System.EventHandler(this.bt_openrxfile_Click); + // + // label_rxfile + // + 
this.label_rxfile.AutoSize = true; + this.label_rxfile.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.label_rxfile.Location = new System.Drawing.Point(749, 10); + this.label_rxfile.Name = "label_rxfile"; + this.label_rxfile.Size = new System.Drawing.Size(48, 13); + this.label_rxfile.TabIndex = 7; + this.label_rxfile.Text = "RX File"; + // + // label_txfile + // + this.label_txfile.AutoSize = true; + this.label_txfile.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.label_txfile.Location = new System.Drawing.Point(209, 10); + this.label_txfile.Name = "label_txfile"; + this.label_txfile.Size = new System.Drawing.Size(47, 13); + this.label_txfile.TabIndex = 6; + this.label_txfile.Text = "TX File"; + // + // rtb_RXfile + // + this.rtb_RXfile.Font = new System.Drawing.Font("Courier New", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.rtb_RXfile.Location = new System.Drawing.Point(736, 31); + this.rtb_RXfile.Name = "rtb_RXfile"; + this.rtb_RXfile.Size = new System.Drawing.Size(526, 508); + this.rtb_RXfile.TabIndex = 5; + this.rtb_RXfile.Text = ""; + // + // rtb_TXfile + // + this.rtb_TXfile.Font = new System.Drawing.Font("Courier New", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.rtb_TXfile.Location = new System.Drawing.Point(204, 31); + this.rtb_TXfile.Name = "rtb_TXfile"; + this.rtb_TXfile.Size = new System.Drawing.Size(526, 508); + this.rtb_TXfile.TabIndex = 4; + this.rtb_TXfile.Text = ""; + // + // bt_file_send + // + this.bt_file_send.Font = new System.Drawing.Font("Microsoft Sans Serif", 12F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.bt_file_send.ForeColor = System.Drawing.Color.Red; + this.bt_file_send.Location = new System.Drawing.Point(17, 157); + this.bt_file_send.Name = "bt_file_send"; + this.bt_file_send.Size = new System.Drawing.Size(137, 37); + this.bt_file_send.TabIndex = 3; + this.bt_file_send.Text = "SEND"; + this.bt_file_send.UseVisualStyleBackColor = true; + this.bt_file_send.Click += new System.EventHandler(this.bt_file_send_Click); + // + // bt_sendBinaryFile + // + this.bt_sendBinaryFile.Location = new System.Drawing.Point(17, 89); + this.bt_sendBinaryFile.Name = "bt_sendBinaryFile"; + this.bt_sendBinaryFile.Size = new System.Drawing.Size(137, 23); + this.bt_sendBinaryFile.TabIndex = 2; + this.bt_sendBinaryFile.Text = "Load Binary File"; + this.bt_sendBinaryFile.UseVisualStyleBackColor = true; + this.bt_sendBinaryFile.Click += new System.EventHandler(this.bt_sendBinaryFile_Click); + // + // bt_file_html + // + this.bt_file_html.Location = new System.Drawing.Point(17, 60); + this.bt_file_html.Name = "bt_file_html"; + this.bt_file_html.Size = new System.Drawing.Size(137, 23); + this.bt_file_html.TabIndex = 1; + this.bt_file_html.Text = "Load HTML File"; + this.bt_file_html.UseVisualStyleBackColor = true; + this.bt_file_html.Click += new System.EventHandler(this.button2_Click); + // + // bt_file_ascii + // + this.bt_file_ascii.Location = new System.Drawing.Point(17, 31); + this.bt_file_ascii.Name = "bt_file_ascii"; + this.bt_file_ascii.Size = new System.Drawing.Size(137, 23); + this.bt_file_ascii.TabIndex = 0; + this.bt_file_ascii.Text = "Load ASCII Text File"; + this.bt_file_ascii.UseVisualStyleBackColor = true; + 
this.bt_file_ascii.Click += new System.EventHandler(this.bt_file_ascii_Click); + // + // tabPage5 + // + this.tabPage5.Controls.Add(this.textBox1); + this.tabPage5.Controls.Add(this.bt_shutdown); + this.tabPage5.Controls.Add(this.cb_savegoodfiles); + this.tabPage5.Controls.Add(this.cb_stampcall); + this.tabPage5.Controls.Add(this.tb_callsign); + this.tabPage5.Controls.Add(this.label1); + this.tabPage5.Location = new System.Drawing.Point(4, 22); + this.tabPage5.Name = "tabPage5"; + this.tabPage5.Size = new System.Drawing.Size(1291, 553); + this.tabPage5.TabIndex = 4; + this.tabPage5.Text = "Setup"; + this.tabPage5.UseVisualStyleBackColor = true; + // + // textBox1 + // + this.textBox1.BorderStyle = System.Windows.Forms.BorderStyle.None; + this.textBox1.Font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((byte)(0))); + this.textBox1.ForeColor = System.Drawing.Color.Red; + this.textBox1.Location = new System.Drawing.Point(379, 78); + this.textBox1.Multiline = true; + this.textBox1.Name = "textBox1"; + this.textBox1.Size = new System.Drawing.Size(259, 55); + this.textBox1.TabIndex = 5; + this.textBox1.Text = "before switching off the modem SBC\r\nclick here to avoid defective SD-cards.\r\nWAIT" + + " 1 minute before powering OFF the modem."; + // + // bt_shutdown + // + this.bt_shutdown.Location = new System.Drawing.Point(379, 49); + this.bt_shutdown.Name = "bt_shutdown"; + this.bt_shutdown.Size = new System.Drawing.Size(155, 23); + this.bt_shutdown.TabIndex = 4; + this.bt_shutdown.Text = "Shutdown Modem-SBC"; + this.bt_shutdown.UseVisualStyleBackColor = true; + this.bt_shutdown.Click += new System.EventHandler(this.bt_shutdown_Click); + // + // cb_savegoodfiles + // + this.cb_savegoodfiles.AutoSize = true; + this.cb_savegoodfiles.Checked = true; + this.cb_savegoodfiles.CheckState = System.Windows.Forms.CheckState.Checked; + this.cb_savegoodfiles.Location = new System.Drawing.Point(106, 136); + this.cb_savegoodfiles.Name = "cb_savegoodfiles"; + this.cb_savegoodfiles.Size = new System.Drawing.Size(159, 17); + this.cb_savegoodfiles.TabIndex = 3; + this.cb_savegoodfiles.Text = "Save good files/images only"; + this.cb_savegoodfiles.UseVisualStyleBackColor = true; + // + // cb_stampcall + // + this.cb_stampcall.AutoSize = true; + this.cb_stampcall.Checked = true; + this.cb_stampcall.CheckState = System.Windows.Forms.CheckState.Checked; + this.cb_stampcall.Location = new System.Drawing.Point(106, 113); + this.cb_stampcall.Name = "cb_stampcall"; + this.cb_stampcall.Size = new System.Drawing.Size(146, 17); + this.cb_stampcall.TabIndex = 2; + this.cb_stampcall.Text = "Insert Callsign into picture"; + this.cb_stampcall.UseVisualStyleBackColor = true; + // + // tb_callsign + // + this.tb_callsign.CharacterCasing = System.Windows.Forms.CharacterCasing.Upper; + this.tb_callsign.Location = new System.Drawing.Point(106, 49); + this.tb_callsign.Name = "tb_callsign"; + this.tb_callsign.Size = new System.Drawing.Size(151, 20); + this.tb_callsign.TabIndex = 1; + // + // label1 + // + this.label1.AutoSize = true; + this.label1.Location = new System.Drawing.Point(49, 52); + this.label1.Name = "label1"; + this.label1.Size = new System.Drawing.Size(46, 13); + this.label1.TabIndex = 0; + this.label1.Text = "Callsign:"; + // + // cb_speed + // + this.cb_speed.FormattingEnabled = true; + this.cb_speed.Items.AddRange(new object[] { + "3000 QPSK BW: 1800 Hz ", + "3150 QPSK BW: 1900 Hz ", + "3675 QPSK BW: 2200 Hz ", + "4000 QPSK BW: 2400 Hz ", + 
"4410 QPSK BW: 2700 Hz (default QO-100)", + "4800 QPSK BW: 2900 Hz (experimental)", + "5500 8PSK BW: 2300 Hz", + "6000 8PSK BW: 2500 Hz (QO-100 beacon)"}); + this.cb_speed.Location = new System.Drawing.Point(636, 644); + this.cb_speed.Name = "cb_speed"; + this.cb_speed.Size = new System.Drawing.Size(324, 21); + this.cb_speed.TabIndex = 11; + this.cb_speed.Text = "4410 QPSK BW: 2700 Hz (default QO-100)"; + this.cb_speed.SelectedIndexChanged += new System.EventHandler(this.comboBox1_SelectedIndexChanged); + // + // label_speed + // + this.label_speed.AutoSize = true; + this.label_speed.Location = new System.Drawing.Point(545, 647); + this.label_speed.Name = "label_speed"; + this.label_speed.Size = new System.Drawing.Size(71, 13); + this.label_speed.TabIndex = 12; + this.label_speed.Text = "Speed [bit/s]:"; + // + // timer_searchmodem + // + this.timer_searchmodem.Interval = 1000; + this.timer_searchmodem.Tick += new System.EventHandler(this.timer_searchmodem_Tick); + // + // Form1 + // + this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F); + this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font; + this.ClientSize = new System.Drawing.Size(1304, 691); + this.Controls.Add(this.cb_speed); + this.Controls.Add(this.label_speed); + this.Controls.Add(this.panel_txspectrum); + this.Controls.Add(this.panel_constel); + this.Controls.Add(this.statusStrip1); + this.Controls.Add(this.tabControl1); + this.ForeColor = System.Drawing.SystemColors.ControlText; + this.Icon = ((System.Drawing.Icon)(resources.GetObject("$this.Icon"))); + this.Name = "Form1"; + this.Text = "QO-100 NB Transponder HS Transmission AMSAT-DL V0.1 by DJ0ABR"; + this.FormClosing += new System.Windows.Forms.FormClosingEventHandler(this.Form1_FormClosing); + this.statusStrip1.ResumeLayout(false); + this.statusStrip1.PerformLayout(); + this.tabPage1.ResumeLayout(false); + this.tabPage2.ResumeLayout(false); + this.tabPage2.PerformLayout(); + this.groupBox1.ResumeLayout(false); + this.groupBox1.PerformLayout(); + ((System.ComponentModel.ISupportInitialize)(this.pictureBox_rximage)).EndInit(); + ((System.ComponentModel.ISupportInitialize)(this.pictureBox_tximage)).EndInit(); + this.tabControl1.ResumeLayout(false); + this.tabPage3.ResumeLayout(false); + this.tabPage3.PerformLayout(); + this.tabPage5.ResumeLayout(false); + this.tabPage5.PerformLayout(); + this.ResumeLayout(false); + this.PerformLayout(); + + } + + #endregion + + private System.Windows.Forms.Timer timer_udpTX; + private System.Windows.Forms.Timer timer_udprx; + private System.Windows.Forms.StatusStrip statusStrip1; + private System.Windows.Forms.ToolStripStatusLabel toolStripStatusLabel; + private System.Windows.Forms.Panel panel_constel; + private System.Windows.Forms.Timer timer_qpsk; + private System.Windows.Forms.Panel panel_txspectrum; + private System.Windows.Forms.TabPage tabPage1; + private System.Windows.Forms.Button button_stopBERtest; + private System.Windows.Forms.Button button_startBERtest; + private System.Windows.Forms.RichTextBox rtb; + private System.Windows.Forms.TabPage tabPage2; + private System.Windows.Forms.ComboBox comboBox_quality; + private System.Windows.Forms.Button button_loadimage; + private System.Windows.Forms.Button button_cancelimg; + private System.Windows.Forms.Button button_sendimage; + private System.Windows.Forms.CheckBox checkBox_big; + private System.Windows.Forms.Label label2; + private System.Windows.Forms.Label label_rximage; + private System.Windows.Forms.Label label_tximage; + private System.Windows.Forms.PictureBox 
pictureBox_rximage; + private System.Windows.Forms.PictureBox pictureBox_tximage; + private System.Windows.Forms.TabControl tabControl1; + private System.Windows.Forms.ToolStripStatusLabel ts_ip; + private System.Windows.Forms.Panel groupBox1; + private System.Windows.Forms.TabPage tabPage3; + private System.Windows.Forms.RichTextBox rtb_TXfile; + private System.Windows.Forms.Button bt_file_send; + private System.Windows.Forms.Button bt_sendBinaryFile; + private System.Windows.Forms.Button bt_file_html; + private System.Windows.Forms.Button bt_file_ascii; + private System.Windows.Forms.RichTextBox rtb_RXfile; + private System.Windows.Forms.Label label_rxfile; + private System.Windows.Forms.Label label_txfile; + private System.Windows.Forms.ToolStripStatusLabel RXstatus; + private System.Windows.Forms.ComboBox cb_speed; + private System.Windows.Forms.Label label_speed; + private System.Windows.Forms.Timer timer_searchmodem; + private System.Windows.Forms.Button bt_rximages; + private System.Windows.Forms.Button bt_openrxfile; + private System.Windows.Forms.CheckBox cb_loop; + private System.Windows.Forms.Label label_nextimage; + private System.Windows.Forms.Button button2; + private System.Windows.Forms.TabPage tabPage5; + private System.Windows.Forms.TextBox tb_callsign; + private System.Windows.Forms.Label label1; + private System.Windows.Forms.CheckBox cb_stampcall; + private System.Windows.Forms.CheckBox cb_savegoodfiles; + private System.Windows.Forms.TextBox textBox1; + private System.Windows.Forms.Button bt_shutdown; + } +} + diff --git a/oscardata/oscardata/Form1.cs b/oscardata/oscardata/Form1.cs new file mode 100755 index 0000000..a6d11ea --- /dev/null +++ b/oscardata/oscardata/Form1.cs @@ -0,0 +1,1330 @@ +/* +* High Speed modem to transfer data in a 2,7kHz SSB channel +* ========================================================= +* Author: DJ0ABR +* +* (c) DJ0ABR +* www.dj0abr.de +* +* This program is free software; you can redistribute it and/or modify +* it under the terms of the GNU General Public License as published by +* the Free Software Foundation; either version 2 of the License, or +* (at your option) any later version. +* +* This program is distributed in the hope that it will be useful, +* but WITHOUT ANY WARRANTY; without even the implied warranty of +* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +* GNU General Public License for more details. +* +* You should have received a copy of the GNU General Public License +* along with this program; if not, write to the Free Software +* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+* +*/ + +using System; +using System.Windows.Forms; +using System.Drawing; +using System.Text; +using System.IO; +using System.Drawing.Drawing2D; +using System.Diagnostics; + +namespace oscardata +{ + public partial class Form1 : Form + { + Imagehandler ih = new Imagehandler(); + int txcommand = 0; // commands what to send + int txframecounter = 0; + Byte frameinfo = (Byte)statics.FirstFrame; + String TXfilename; + int rxbytecounter = 0; + DateTime starttime; + String old_tsip = ""; + + public Form1() + { + // init GUI + InitializeComponent(); + + // test OS type + OperatingSystem osversion = System.Environment.OSVersion; + statics.OSversion = osversion.Platform.ToString(); + if (osversion.VersionString.Contains("indow")) + statics.ostype = 0; + else + statics.ostype = 1; // Linux + + // set temp paths + statics.zip_TXtempfilename = statics.addTmpPath(statics.zip_TXtempfilename); + statics.zip_RXtempfilename = statics.addTmpPath(statics.zip_RXtempfilename); + statics.jpg_tempfilename = statics.addTmpPath(statics.jpg_tempfilename); + + load_Setup(); + + checkBox_small_CheckedChanged(null, null); + + // init speed + comboBox1_SelectedIndexChanged(null,null); + + // create Udp Communication ports and init UDP system + Udp.InitUdp(); + search_modem(); + ArraySend.ArraySendInit(); + + // enable processing + timer_udpTX.Enabled = true; + timer_udprx.Enabled = true; + timer_searchmodem.Enabled = true; + + //pictureBox_rximage.BackgroundImage = Image.FromFile("/tmp/temp293.jpg"); + } + + // TX timer + int loopdelay = 0; + private void timer1_Tick(object sender, EventArgs e) + { + // BER testdata + if (txcommand == statics.BERtest) + { + if (Udp.GetBufferCount() > 3) return; + + Byte[] txdata = new byte[statics.PayloadLen+2]; + + txdata[0] = (Byte)statics.BERtest; // BER Test Marker + txdata[1] = frameinfo; + + Byte tb = (Byte)'A'; + for (int i = 2; i < txdata.Length; i++) + { + txdata[i] = tb; + tb++; + if (tb == 'z') tb = (Byte)'A'; + } + + // and transmit it + Udp.UdpSend(txdata); + + frameinfo = (Byte)statics.NextFrame; + txframecounter++; + } + + if (ArraySend.getSending()) + { + button_loadimage.Enabled = false; + button_sendimage.Enabled = false; + } + else + { + button_loadimage.Enabled = true; + if (TXimagefilename != "") + button_sendimage.Enabled = true; + else + button_sendimage.Enabled = false; + } + + if (TXfoldername == "" || lastFullName == "") + cb_loop.Enabled = false; + else + cb_loop.Enabled = true; + + ShowTXstatus(); + + if (txcommand == statics.Image) + { + // if "loop" is selected send the next image in folder + if (cb_loop.Checked) + { + // check if we are ready with any transmission + if (ArraySend.getSending() == false) + { + // this timer runs with 10ms + // after an image was finished, wait before starting the new one + // this helps cleaning any buffer + int spacetime = 20000; // ms + label_nextimage.Text = "next image in " + ((spacetime / timer_udpTX.Interval - loopdelay) / 10).ToString() + " s"; + if (++loopdelay > (spacetime / timer_udpTX.Interval)) + { + // start sending a new picture + startNextImage(); + } + } + else + { + loopdelay = 0; + label_nextimage.Text = "transmitting"; + } + } + else + label_nextimage.Text = ""; + } + else + label_nextimage.Text = ""; + + if (ts_ip.Text.Contains("?") || ts_ip.Text.Contains("1.2.3.4") || old_tsip != statics.ModemIP) + { + if (statics.ModemIP == "1.2.3.4") + ts_ip.Text = "Modem-IP: ?"; + else + { + ts_ip.Text = "Modem-IP: " + statics.ModemIP; + old_tsip = statics.ModemIP; + comboBox1_SelectedIndexChanged(null, null); // 
send speed to modem + } + } + } + + private void Form1_FormClosing(object sender, FormClosingEventArgs e) + { + save_Setup(); + // exit the threads + statics.running = false; + Udp.Close(); + } + + // RX timer + int rxstat = 0; + int speed; + int tmpnum = 0; + int file_lostframes = 0; + private void timer_udprx_Tick(object sender, EventArgs e) + { + while (true) + { + Byte[] rxd = Udp.UdpReceive(); + if (rxd == null) break; + + // these status information are added by the unpack routine + int rxtype = rxd[0]; + int rxfrmnum = rxd[1]; + rxfrmnum <<= 8; + rxfrmnum += rxd[2]; + int minfo = rxd[3]; + rxstat = rxd[4]; + speed = rxd[5]; + speed <<= 8; + speed += rxd[6]; + int dummy3 = rxd[7]; + int dummy4 = rxd[8]; + int dummy5 = rxd[9]; + + if (rxstat == 4) + { + framelost++; + file_lostframes++; + } + calcBer(rxfrmnum); + + if (minfo == statics.FirstFrame) + file_lostframes = 0; + + Byte[] rxdata = new byte[rxd.Length - 10]; + Array.Copy(rxd, 10, rxdata, 0, rxd.Length - 10); + + //Console.WriteLine("minfo:" + minfo + " data:" + rxdata[0].ToString("X2") + " " + rxdata[1].ToString("X2")); + + if (minfo == statics.FirstFrame) + { + rxbytecounter = rxdata.Length; + starttime = DateTime.UtcNow; + } + else + { + rxbytecounter += rxdata.Length; + } + TimeSpan ts = DateTime.UtcNow - starttime; + ts += new TimeSpan(0, 0, 0, 1); + + // ===== ASCII RX ================================================ + if (rxtype == statics.AsciiFile) + { + // if this is the first frame of a file transfer + // then read and remove the file info header + if (minfo == statics.FirstFrame || minfo == statics.SingleFrame) + { + //Console.WriteLine("first, single"); + rxdata = ArraySend.GetAndRemoveHeader(rxdata); + if (rxdata == null) return; + } + + // collect all received data into zip_RXtempfilename + Byte[] ba = null; + Byte[] nba; + try + { + ba = File.ReadAllBytes(statics.zip_RXtempfilename); + } + catch { } + + if (ba != null) + { + //Console.WriteLine("write next"); + nba = new Byte[ba.Length + rxdata.Length]; + Array.Copy(ba, nba, ba.Length); + Array.Copy(rxdata, 0, nba, ba.Length, rxdata.Length); + } + else + { + //Console.WriteLine("write first"); + nba = new Byte[rxdata.Length]; + Array.Copy(rxdata, nba, rxdata.Length); + } + File.WriteAllBytes(statics.zip_RXtempfilename, nba); + long filesize = 0; + + // check if transmission is finished + if (minfo == statics.LastFrame || minfo == statics.SingleFrame) + { + // statics.zip_RXtempfilename has the received data, but maybee too long (multiple of payload length) + // reduce for the real file length + Byte[] fc = File.ReadAllBytes(statics.zip_RXtempfilename); + Byte[] fdst = new byte[ArraySend.FileSize]; + Array.Copy(fc, 0, fdst, 0, ArraySend.FileSize); + File.WriteAllBytes(statics.zip_RXtempfilename, fdst); + + //Console.WriteLine("size:"+ ArraySend.FileSize.ToString()); + + //Console.WriteLine("last"); + // unzip received data and store result in file: unzipped_RXtempfilename + rtb_RXfile.Text = ""; + ZipStorer zs = new ZipStorer(); + String fl = zs.unzipFile(statics.zip_RXtempfilename); + if (fl != null) + { + // save file + int idx = fl.LastIndexOf('/'); + if (idx == -1) idx = fl.LastIndexOf('\\'); + String fdest = fl.Substring(idx + 1); + fdest = statics.getHomePath("", fdest); + try { File.Delete(fdest); } catch { } + File.Move(fl, fdest); + filesize = statics.GetFileSize(fdest); + + String serg = File.ReadAllText(fdest); + printText(rtb_RXfile, serg); + } + else + printText(rtb_RXfile, "unzip failed"); + File.Delete(statics.zip_RXtempfilename); + } + + int rest 
= ArraySend.FileSize - rxbytecounter; + if (rest < 0) rest = 0; + if (rest > 0) + label_rxfile.Text = "RX file: " + ArraySend.rxFilename + " " + rest.ToString() + " bytes"; + else + label_rxfile.Text = "RX file: " + ArraySend.rxFilename + " " + filesize + " bytes"; + + if (minfo == statics.LastFrame) + ShowStatus((int)filesize, (int)ts.TotalSeconds); + else + ShowStatus(rxbytecounter, (int)ts.TotalSeconds); + } + + // ===== HTML File RX ================================================ + if (rxtype == statics.HTMLFile) + { + // if this is the first frame of a file transfer + // then read and remove the file info header + if (minfo == statics.FirstFrame) + { + rxdata = ArraySend.GetAndRemoveHeader(rxdata); + if (rxdata == null) return; + } + + Byte[] ba = null; + Byte[] nba; + try + { + ba = File.ReadAllBytes(statics.zip_RXtempfilename); + } + catch { } + + if (ba != null) + { + nba = new Byte[ba.Length + rxdata.Length]; + Array.Copy(ba, nba, ba.Length); + Array.Copy(rxdata, 0, nba, ba.Length, rxdata.Length); + } + else + { + nba = new Byte[rxdata.Length]; + Array.Copy(rxdata, nba, rxdata.Length); + } + File.WriteAllBytes(statics.zip_RXtempfilename, nba); + long filesize = 0; + if (minfo == statics.LastFrame) + { + // unzip received data + rtb_RXfile.Text = ""; + ZipStorer zs = new ZipStorer(); + // unzip returns filename+path of unzipped file + String fl = zs.unzipFile(statics.zip_RXtempfilename); + if (fl != null) + { + // save file + int idx = fl.LastIndexOf('/'); + if (idx == -1) idx = fl.LastIndexOf('\\'); + String fdest = fl.Substring(idx + 1); + fdest = statics.getHomePath("", fdest); + try { File.Delete(fdest); } catch { } + File.Move(fl, fdest); + filesize = statics.GetFileSize(fdest); + + rxbytecounter = (int)statics.GetFileSize(fdest); + String serg = File.ReadAllText(fdest); + printText(rtb_RXfile, serg); + try + { + OpenUrl(fdest); + } + catch (Exception ex) + { + Console.WriteLine(ex.ToString()); + } + } + else + printText(rtb_RXfile, "unzip failed"); + } + + int rest = ArraySend.FileSize - rxbytecounter; + if (rest < 0) rest = 0; + if (rest > 0) + label_rxfile.Text = "RX file: " + ArraySend.rxFilename + " " + rest.ToString() + " bytes"; + else + label_rxfile.Text = "RX file: " + ArraySend.rxFilename + " " + filesize + " bytes"; + + if (minfo == statics.LastFrame) + ShowStatus(ArraySend.FileSize, (int)ts.TotalSeconds); + else + ShowStatus(rxbytecounter, (int)ts.TotalSeconds); + } + + // ===== Binary File RX ================================================ + if (rxtype == statics.BinaryFile) + { + // if this is the first frame of a file transfer + // then read and remove the file info header + if (minfo == statics.FirstFrame || minfo == statics.SingleFrame) + { + //Console.WriteLine("first, single"); + rxdata = ArraySend.GetAndRemoveHeader(rxdata); + if (rxdata == null) return; + } + + // collect all received data into zip_RXtempfilename + Byte[] ba = null; + Byte[] nba; + try + { + ba = File.ReadAllBytes(statics.zip_RXtempfilename); + } + catch { } + + if (ba != null) + { + //Console.WriteLine("write next"); + nba = new Byte[ba.Length + rxdata.Length]; + Array.Copy(ba, nba, ba.Length); + Array.Copy(rxdata, 0, nba, ba.Length, rxdata.Length); + } + else + { + //Console.WriteLine("write first"); + nba = new Byte[rxdata.Length]; + Array.Copy(rxdata, nba, rxdata.Length); + } + File.WriteAllBytes(statics.zip_RXtempfilename, nba); + long filesize = 0; + + // check if transmission is finished + if (minfo == statics.LastFrame || minfo == statics.SingleFrame) + { + // 
statics.zip_RXtempfilename has the received data, but maybee too long (multiple of payload length) + // reduce for the real file length + Byte[] fc = File.ReadAllBytes(statics.zip_RXtempfilename); + Byte[] fdst = new byte[ArraySend.FileSize]; + Array.Copy(fc, 0, fdst, 0, ArraySend.FileSize); + File.WriteAllBytes(statics.zip_RXtempfilename, fdst); + + //Console.WriteLine("last"); + // unzip received data and store result in file: unzipped_RXtempfilename + rtb_RXfile.Text = ""; + ZipStorer zs = new ZipStorer(); + String fl = zs.unzipFile(statics.zip_RXtempfilename); + if (fl != null) + { + int idx = fl.LastIndexOf('/'); + if(idx == -1) idx = fl.LastIndexOf('\\'); + String fdest = fl.Substring(idx + 1); + fdest = statics.getHomePath("", fdest); + try { File.Delete(fdest); } catch { } + File.Move(fl, fdest); + filesize = statics.GetFileSize(fdest); + //File.WriteAllBytes(fl, nba); + printText(rtb_RXfile, "binary file received\r\n"); + printText(rtb_RXfile, "--------------------\r\n\r\n"); + printText(rtb_RXfile, "file size : " + filesize + " byte\r\n\r\n"); + printText(rtb_RXfile, "stored in : " + fdest + "\r\n\r\n"); + printText(rtb_RXfile, "transmission time : " + ((int)ts.TotalSeconds).ToString() + " seconds" + "\r\n\r\n"); + printText(rtb_RXfile, "transmission speed: " + ((int)(filesize*8/ts.TotalSeconds)).ToString() + " bit/s" + "\r\n\r\n"); + } + else + printText(rtb_RXfile, "unzip failed"); + File.Delete(statics.zip_RXtempfilename); + } + + int rest = ArraySend.FileSize - rxbytecounter; + if (rest < 0) rest = 0; + if (rest > 0) + label_rxfile.Text = "RX file: " + ArraySend.rxFilename + " " + rest.ToString() + " bytes"; + else + label_rxfile.Text = "RX file: " + ArraySend.rxFilename + " " + filesize + " bytes"; + + if (minfo == statics.LastFrame) + ShowStatus((int)filesize, (int)ts.TotalSeconds); + else + ShowStatus(rxbytecounter, (int)ts.TotalSeconds); + } + + // ===== IMAGE RX ================================================ + if (rxtype == statics.Image) + { + // if this is the first frame of a file transfer + // then read and remove the file info header + if (minfo == statics.FirstFrame) + { + rxdata = ArraySend.GetAndRemoveHeader(rxdata); + if (rxdata == null) return; + } + + ih.receive_image(rxdata, minfo); + + // show currect contents of rxtemp.jpg in RX picturebox + try + { + String fn = statics.addTmpPath("temp" + tmpnum.ToString() + ".jpg"); + try + { + File.Delete(fn); + } + catch { } + tmpnum++; + fn = statics.addTmpPath("temp" + tmpnum.ToString() + ".jpg"); + File.Copy(statics.jpg_tempfilename, fn); + + try + { + if(statics.GetFileSize(fn) > 1200) + pictureBox_rximage.BackgroundImage = Image.FromFile(fn); + } + catch { + } + + if (minfo == statics.LastFrame) + { + // file is complete, save in RX storage + // remove possible path from filename + String fname = ArraySend.rxFilename; + int idx = fname.IndexOfAny(new char[] {'\\','/' }); + if (idx != -1) + { + try + { + fname = fname.Substring(idx + 1); + } catch{ } + } + if (!cb_savegoodfiles.Checked || (file_lostframes == 0 && cb_savegoodfiles.Checked)) + { + // add home path and RXstorage path + String fnx = statics.getHomePath(statics.RXimageStorage, fname); + File.Copy(fn, fnx); + } + } + } + catch { } + + int rest = ArraySend.FileSize - rxbytecounter; + if (rest < 0) rest = 0; + if(rest > 0) + label_rximage.Text = "RX image: " + ArraySend.rxFilename + " " + rest.ToString() + " bytes"; + else + label_rximage.Text = "RX image: " + ArraySend.rxFilename; + ShowStatus(rxbytecounter, (int)ts.TotalSeconds); + } + + // ===== BER 
Test ================================================ + if (rxtype == statics.BERtest) + { + RXstatus.Text = "BER: " + ber.ToString("E3") + " " + rxframecounter.ToString() + " frames received OK"; + + BERcheck(rxdata); + } + } + } + + private void OpenUrl(string url) + { + try + { + Process.Start(url); + } + catch + { + // hack because of this: https://github.com/dotnet/corefx/issues/10361 + if (statics.ostype == 0) + { + url = url.Replace("&", "^&"); + Process.Start(new ProcessStartInfo("cmd", $"/c start {url}") { CreateNoWindow = true }); + } + else + { + Process.Start("xdg-open", url); + } + } + } + + private void timer_qpsk_Tick(object sender, EventArgs e) + { + panel_constel.Invalidate(); + panel_txspectrum.Invalidate(); + } + + private void panel_constel_Paint(object sender, PaintEventArgs e) + { + Pen pen = new Pen(Brushes.LightGray); + e.Graphics.DrawEllipse(pen, 0, 0, panel_constel.Size.Width-1, panel_constel.Size.Height-1); + e.Graphics.DrawLine(pen, panel_constel.Size.Width / 2, 0, panel_constel.Size.Width / 2, panel_constel.Size.Height); + e.Graphics.DrawLine(pen, 0, panel_constel.Size.Height / 2, panel_constel.Size.Width, panel_constel.Size.Height/2); + + while (true) + { + qpskitem qi = Udp.UdpGetIQ(); + if (qi == null) break; + + // re and im are in the range of +/- 2^24 (16777216) + // scale it to +/- 128 + double fre = qi.re; + double fim = qi.im; + + fre = fre * panel_constel.Size.Width / 2 / 16777216.0; + fim = fim * panel_constel.Size.Width / 2 / 16777216.0; + + // scale it to the picture + int x = panel_constel.Size.Width / 2 + (int)fre - 2; + int y = panel_constel.Size.Height / 2 + (int)fim - 2; + + e.Graphics.FillEllipse(Brushes.Blue, x, y, 2, 2); + } + } + + static Brush brred = new SolidBrush(Color.FromArgb(255, (byte)255, (byte)220, (byte)220)); + static Brush brgreen = new SolidBrush(Color.FromArgb(255, (byte)240, (byte)255, (byte)240)); + static Brush brgray = new SolidBrush(Color.FromArgb(255, (byte)220, (byte)220, (byte)220)); + static Pen pen = new Pen(Brushes.Black); + static Pen penblue = new Pen(Brushes.Blue, 2); + static Pen pengrey = new Pen(brgray, 1); + Font fnt = new Font("Verdana", 8.0f); + Font smallfnt = new Font("Verdana", 6.0f); + + private void panel_txspectrum_Paint(object sender, PaintEventArgs e) + { + int miny = 200; + int maxy = 2800; + + // horizontal level markers + Point ps = GetFFTPos(0, 0); + Point pe = GetFFTPos(maxxval, maxyval); + int pw = pe.X - ps.X; + int ph = ps.Y - pe.Y; + e.Graphics.FillRectangle(brred, ps.X, pe.Y, pw, ph); + + ps = GetFFTPos(miny/10, 700); + pe = GetFFTPos(maxy/10, 2300); + pw = pe.X - ps.X; + ph = ps.Y - pe.Y; + e.Graphics.FillRectangle(brgreen, ps.X, pe.Y, pw, ph); + + // Coordinates + e.Graphics.DrawLine(pen, GetFFTPos(0, 0), GetFFTPos(maxxval, 0)); + e.Graphics.DrawLine(pen, GetFFTPos(0, 0), GetFFTPos(0, maxyval)); + + // vertical frequency markers for 2.7kHz + for (int i = miny; i <= maxy; i+=100) + { + e.Graphics.DrawLine(pengrey, GetFFTPos(i / 10, 0), GetFFTPos(i / 10, maxyval)); + } + + // Title + e.Graphics.DrawString("Tuning Window", fnt, Brushes.Black, GetFFTPos(110, 3000)); + e.Graphics.DrawString(miny.ToString() + " Hz", smallfnt, Brushes.Black, GetFFTPos(5, 2800)); + e.Graphics.DrawString("1500 Hz", smallfnt, Brushes.Black, GetFFTPos(138, 680)); + e.Graphics.DrawString(maxy.ToString() + " Hz", smallfnt, Brushes.Black, GetFFTPos(270, 2800)); + + e.Graphics.DrawString("min Level", smallfnt, Brushes.Black, GetFFTPos(290, 1000)); + e.Graphics.DrawString("max", smallfnt, Brushes.Black, GetFFTPos(290, 
2450)); + + while (true) + { + UInt16[] da = Udp.UdpGetFFT(); + if (da == null) break; + if (da.Length < maxxval) return; + Fftmean(da); + } + + // da are the FFT data + // from 0 Hz to 4410 Hz with a resolution of 10 Hz + // so we get 441 values + // there may be 442, just ignore the last one + GraphicsPath gp = new GraphicsPath(); + + // calculate mean value and calc mean value over all values + UInt16[] su = new UInt16[maxxval+1]; + for (int i = 0; i < maxxval; i++) + { + su[i] = 0; + for(int j=0; j< meansize; j++) + su[i] += dam[j, i]; + su[i] /= (UInt16)meansize; + } + + // scale and X-mean + int lastu = 0; + for (int i = 0; i < maxxval; i++) + { + UInt16 u = 0; + if (i >= 1 && i < maxxval - 1) + u = (UInt16)((su[i - 1] + su[i] + su[i + 1]) / 3); + else + u = su[i]; + + u *= 3; + gp.AddLine(GetFFTPos(i, lastu), GetFFTPos(i + 1, u)); + lastu = u; + } + + e.Graphics.DrawPath(penblue, gp); + } + + private UInt16[,] dam = new UInt16[meansize, maxxval]; + + private void Fftmean(UInt16[] v) + { + for (int sh = meansize - 1; sh > 0; sh--) + for (int i = 0; i < maxxval; i++) + dam[sh, i] = dam[sh - 1, i]; + + for (int i = 0; i < maxxval; i++) + dam[0, i] = v[i]; + } + + readonly static int meansize = 20; + readonly static int maxxval = (statics.real_datarate / 10) * 6 / 10; + readonly int maxyval = 3000; + + private Point GetFFTPos(int x, int y) + { + int leftMargin = 2; + int rightMargin = 2; + int topMargin = 2; + int bottomMargin =2; + + int xsize = panel_txspectrum.Size.Width; + int newx = (x * (xsize - leftMargin - rightMargin)) / maxxval; + newx += leftMargin; + + int ysize = panel_txspectrum.Size.Height; + int newy = (y * (ysize - topMargin - bottomMargin)) / maxyval; + newy += bottomMargin; + newy = ysize - newy; + + Point p = new Point(newx, newy); + return p; + } + + void printText(RichTextBox rtb, String s) + { + AppendTextOnce(rtb, new Font("Courier New", (float)8), Color.Blue, Color.White, s); + } + + void AppendTextOnce(RichTextBox rtb, Font selfont, Color color, Color bcolor, string text) + { + try + { + if (text.Contains("\n")) + { + char[] ca = new char[] { '\n', '\r' }; + + text = text.Trim(ca); + text += "\n"; + } + + // max. 
xxx lines; if more, delete the oldest line + if (rtb.Lines.Length > 200) + { + rtb.SelectionStart = 0; + rtb.SelectionLength = rtb.Text.IndexOf("\n", 0) + 1; + rtb.SelectedText = ""; + } + + int start = rtb.TextLength; + rtb.AppendText(text); + int end = rtb.TextLength; + + // Textbox may transform chars, so (end-start) != text.Length + rtb.Select(start, end - start); + rtb.SelectionColor = color; + rtb.SelectionFont = selfont; + rtb.SelectionBackColor = bcolor; + rtb.Select(end, 0); + + rtb.ScrollToCaret(); + } + catch (Exception e) + { + Console.WriteLine(e.ToString()); + } + } + + String TXimagefilename = ""; + String TXRealFilename = ""; + long TXRealFileSize = 0; + String TXfoldername = ""; + String lastFullName = ""; + + // prepare an image file for transmission + void prepareImage(String fullfn) + { + if (statics.checkImage(fullfn) == false) return; + + // all images are converted to jpg, make the new filename + TXfoldername = statics.purePath(fullfn); + TXRealFilename = statics.pureFilename(fullfn); + TXRealFilename = statics.AddReplaceFileExtension(TXRealFilename,"jpg"); + lastFullName = fullfn; + + // random filename for the picturebox control (the picturebox cannot reload an image from the same filename) + try { File.Delete(TXimagefilename); } catch { } + Random randNum = new Random(); + TXimagefilename = statics.addTmpPath("tempTX" + randNum.Next(0, 65000).ToString() + ".jpg"); + + // get the quality selected by the user + String qual = comboBox_quality.Text; + long max_size = 22500; + if (qual.Contains("30s")) max_size = 12000; + if (qual.Contains("2min")) max_size = 45000; + if (qual.Contains("4min")) max_size = 90000; + + // resize the image and save it according to the quality setting + Image img = new Bitmap(fullfn); + String cs = tb_callsign.Text; + if (cb_stampcall.Checked == false) cs = ""; + if (!checkBox_big.Checked) + { + img = ih.ResizeImage(img, 320, 240, cs); + // set quality by reducing the file size and save under default name + ih.SaveJpgAtFileSize(img, TXimagefilename, max_size / 2); + } + else + { + img = ih.ResizeImage(img, 640, 480, cs); + // set quality by reducing the file size and save under default name + ih.SaveJpgAtFileSize(img, TXimagefilename, max_size); + } + pictureBox_tximage.Load(TXimagefilename); + TXRealFileSize = statics.GetFileSize(TXimagefilename); + ShowTXstatus(); + txcommand = statics.Image; + } + + void ShowTXstatus() + { + if(txcommand == statics.Image) + label_tximage.Text = "TX image: " + TXRealFilename + ". Sent: " + (ArraySend.txpos / 1000).ToString() + " of " + (TXRealFileSize / 1000).ToString() + " kB"; + else + label_txfile.Text = "TX file: " + TXRealFilename + ". 
Sent: " + (ArraySend.txpos / 1000).ToString() + " of " + (TXRealFileSize / 1000).ToString() + " kB"; + } + + // in loop mode only: send the next picture in the current image folder + void startNextImage() + { + if (TXfoldername == "" || lastFullName == "") return; + + // read all files from the folder + String[] files = Directory.GetFiles(TXfoldername); + Array.Sort(files); + int i; + bool found = false; + for(i=0; i= 1) + rspeed = rxbytecounter * 8 / totalseconds; + RXstatus.Text = "received " + rxbytecounter + " byte " + totalseconds + " s, " + rspeed + " bit/s"; + } + + private void button_cancelimg_Click(object sender, EventArgs e) + { + //txcommand = statics.noTX; // finished + label_rximage.ForeColor = Color.Black; + pictureBox_rximage.Image = null; + ArraySend.stopSending(); + } + + private void checkBox_small_CheckedChanged(object sender, EventArgs e) + { + // scale all elements + // this is required if a scaled screen resolution is used for a large 4k monitor, important under mono + // since mono fails in automatic scaling if the screen resolution is different from 1:1 + label_tximage.Location = new Point(6, 7); + label_rximage.Location = new Point(650, 7); + pictureBox_tximage.Size = new Size(640,480); + pictureBox_rximage.Size = new Size(640,480); + int yPicBoxes = label_rximage.Location.Y + label_rximage.Size.Height + 3; + pictureBox_tximage.Location = new Point(1, yPicBoxes); + pictureBox_rximage.Location = new Point(642, yPicBoxes); + + int gb_yloc = pictureBox_tximage.Location.Y + pictureBox_tximage.Size.Height + 3; + groupBox1.Location = new Point(3, gb_yloc); + + tabControl1.Size = new Size(pictureBox_tximage.Size.Width + pictureBox_rximage.Size.Width + 10, + label_rximage.Size.Height + 3 + pictureBox_rximage.Size.Height + 3 + groupBox1.Size.Height + 3 + 20); + + int rxpan_yloc = tabControl1.Location.Y + tabControl1.Size.Height + 3; + panel_constel.Location = new Point(11, rxpan_yloc); + panel_constel.Size = new Size(75,75); + + panel_txspectrum.Location = new Point(92, rxpan_yloc); + panel_txspectrum.Size = new Size(441,75); + + rtb.Size = new Size(tabControl1.Size.Width - 30, tabControl1.Size.Height - button_startBERtest.Location.Y - button_startBERtest.Size.Height - 44); + + this.Size = new Size(tabControl1.Size.Width + 23, rxpan_yloc + panel_constel.Size.Height + statusStrip1.Size.Height + 42); + + int xf = bt_file_ascii.Location.X + bt_file_ascii.Size.Width + 20; + int yf = bt_file_ascii.Location.Y; + rtb_TXfile.Location = new Point(xf, yf); + + int mw = tabControl1.Size.Width - bt_file_ascii.Size.Width - 80; + mw /= 2; + int mh = tabControl1.Size.Height - bt_file_ascii.Size.Height - 50; + rtb_TXfile.Size = new Size(mw, mh); + + xf += mw + 5; + rtb_RXfile.Location = new Point(xf, yf); + rtb_RXfile.Size = new Size(mw, mh); + + int ly = rtb_TXfile.Location.Y / 4; + label_txfile.Location = new Point(rtb_TXfile.Location.X, ly); + label_rxfile.Location = new Point(rtb_RXfile.Location.X, ly); + + label_speed.Location = new Point(panel_txspectrum.Location.X + panel_txspectrum.Size.Width + 20,panel_txspectrum.Location.Y+10); + cb_speed.Location = new Point(label_speed.Location.X + label_speed.Size.Width + 10, label_speed.Location.Y-5); + } + + public String GetMyBroadcastIP() + { + String ip = "255.255.255.255"; + String[] myips = statics.getOwnIPs(); + //Console.WriteLine("BClen: " + myips.Length.ToString()); + // only if the PC has exactly one IP address + // if it has more than one, we do not know which network to broadcast on, so we use 255.255.255.255 + if (myips.Length == 1) + { + 
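// worked example (hypothetical address): with a single local IP of 192.168.1.23 the code below keeps everything up to the last '.' and appends ".255", giving the broadcast address 192.168.1.255 +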
int idx = myips[0].LastIndexOf('.'); + if (idx >= 0) + { + ip = myips[0].Substring(0, idx); + ip += ".255"; + //Console.WriteLine("BCip: " + ip); + } + } + return ip; + } + + /* + * search for the modem IP: + * send a search message (2 bytes) via UDP to port UdpBCport + * if a modem receives this message, it returns with an + * UDP message to UdpBCport containing a String with it's IP address + */ + + private void search_modem() + { + Udp.UdpBCsend(new Byte[] { (Byte)0x3c }, GetMyBroadcastIP(), statics.UdpBCport_AppToModem); + + Udp.searchtimeout++; + if (Udp.searchtimeout >= 3) + statics.ModemIP = "1.2.3.4"; + } + + private void bt_file_ascii_Click(object sender, EventArgs e) + { + OpenFileDialog open = new OpenFileDialog(); + open.Filter = "Text Files(*.txt*; *.*)|*.txt; *.*"; + if (open.ShowDialog() == DialogResult.OK) + { + TXfilename = open.FileName; + TXRealFilename = open.SafeFileName; + String text = File.ReadAllText(TXfilename); + rtb_TXfile.Text = text; + txcommand = statics.AsciiFile; + // compress file + ZipStorer zs = new ZipStorer(); + zs.zipFile(statics.zip_TXtempfilename,open.SafeFileName,open.FileName); + + TXRealFileSize = statics.GetFileSize(statics.zip_TXtempfilename); + ShowTXstatus(); + } + } + + private void bt_file_send_Click(object sender, EventArgs e) + { + rtb_RXfile.Text = ""; + File.Delete(statics.zip_RXtempfilename); + + Byte[] textarr = File.ReadAllBytes(statics.zip_TXtempfilename); + ArraySend.Send(textarr, (Byte)txcommand, TXfilename, TXRealFilename); + } + + private void button2_Click(object sender, EventArgs e) + { + OpenFileDialog open = new OpenFileDialog(); + open.Filter = "HTML Files(*.html; *.htm; *.*)|*.html; *.htm; *.*"; + if (open.ShowDialog() == DialogResult.OK) + { + TXfilename = open.FileName; + TXRealFilename = open.SafeFileName; + String text = File.ReadAllText(TXfilename); + rtb_TXfile.Text = text; + txcommand = statics.HTMLFile; + // compress file + ZipStorer zs = new ZipStorer(); + zs.zipFile(statics.zip_TXtempfilename, open.SafeFileName, open.FileName); + + TXRealFileSize = statics.GetFileSize(statics.zip_TXtempfilename); + ShowTXstatus(); + } + } + + private void bt_sendBinaryFile_Click(object sender, EventArgs e) + { + OpenFileDialog open = new OpenFileDialog(); + open.Filter = "All Files(*.*)|*.*"; + if (open.ShowDialog() == DialogResult.OK) + { + TXfilename = open.FileName; + TXRealFilename = open.SafeFileName; + rtb_TXfile.Text = "Binary file " + TXfilename + " loaded"; + txcommand = statics.BinaryFile; + // compress file + ZipStorer zs = new ZipStorer(); + zs.zipFile(statics.zip_TXtempfilename, open.SafeFileName, open.FileName); + + TXRealFileSize = statics.GetFileSize(statics.zip_TXtempfilename); + ShowTXstatus(); + } + } + + private void comboBox1_SelectedIndexChanged(object sender, EventArgs e) + { + int idx = cb_speed.SelectedIndex; + int real_rate=4000; + + switch (idx) + { + case 0: real_rate = 3000; break; + case 1: real_rate = 3150; break; + case 2: real_rate = 3675; break; + case 3: real_rate = 4000; break; + case 4: real_rate = 4410; break; + case 5: real_rate = 4800; break; + case 6: real_rate = 5525; break; + case 7: real_rate = 6000; break; + } + + statics.setDatarate(real_rate); + + Byte[] txdata = new byte[statics.PayloadLen + 2]; + txdata[0] = (Byte)statics.ResamplingRate; // BER Test Marker + txdata[1] = (Byte)idx; + + // and send info to modem + Udp.UdpSend(txdata); + + //txcommand = statics.noTX; + // stop any ongoing transmission + button_cancelimg_Click(null, null); + } + + + private void 
timer_searchmodem_Tick(object sender, EventArgs e) + { + search_modem(); + } + + private void bt_rximages_Click(object sender, EventArgs e) + { + if (statics.ostype == 0) + { + try + { + String s = "file://" + statics.getHomePath(statics.RXimageStorage, ""); + Process.Start(s); + } + catch (Exception ex) + { + Console.WriteLine(ex.ToString()); + } + } + else + { + try + { + Process.Start("xdg-open", statics.getHomePath(statics.RXimageStorage, "")); + } + catch (Exception ex) + { + Console.WriteLine(ex.ToString()); + } + } + + } + + /// + // TEST ONLY: tell modem to send a file + private void button1_Click(object sender, EventArgs e) + { + Byte[] txdata = new byte[statics.PayloadLen + 2]; + txdata[0] = (Byte)statics.AutosendFile; + + // and transmit it + Udp.UdpSend(txdata); + } + + private void bt_openrxfile_Click(object sender, EventArgs e) + { + if (statics.ostype == 0) + { + try + { + String s = "file://" + statics.getHomePath("", ""); + Process.Start(s); + } + catch (Exception ex) + { + Console.WriteLine(ex.ToString()); + } + } + else + { + try + { + Process.Start("xdg-open", statics.getHomePath("", "")); + } + catch (Exception ex) + { + Console.WriteLine(ex.ToString()); + } + } + } + + private String ReadString(StreamReader sr) + { + String s = sr.ReadLine(); + if (s != null) + { + return s; + } + return " "; + } + + private int ReadInt(StreamReader sr) + { + int v; + + try + { + String s = sr.ReadLine(); + if (s != null) + { + v = Convert.ToInt32(s); + return v; + } + } + catch { } + return 0; + } + + void load_Setup() + { + try + { + using (StreamReader sr = new StreamReader(statics.getHomePath("", "od.cfg"))) + { + tb_callsign.Text = ReadString(sr); + cb_speed.Text = ReadString(sr); + String s = ReadString(sr); + cb_stampcall.Checked = (s == "1"); + s = ReadString(sr); + cb_savegoodfiles.Checked = (s == "1"); + } + } + catch + { + tb_callsign.Text = ""; + cb_speed.Text = "4000 QPSK BW: 2400 Hz (default QO-100)"; + } + } + + void save_Setup() + { + try + { + using (StreamWriter sw = new StreamWriter(statics.getHomePath("", "od.cfg"))) + { + sw.WriteLine(tb_callsign.Text); + sw.WriteLine(cb_speed.Text); + sw.WriteLine(cb_stampcall.Checked?"1":"0"); + sw.WriteLine(cb_savegoodfiles.Checked ? "1" : "0"); + } + } + catch { } + } + + private void bt_shutdown_Click(object sender, EventArgs e) + { + DialogResult dr = MessageBox.Show("Do you want to shut down the Modem-Computer ?", "Shut Down Modem", MessageBoxButtons.YesNo); + if (dr == DialogResult.Yes) + { + Byte[] txdata = new byte[statics.PayloadLen + 2]; + txdata[0] = (Byte)statics.Modem_shutdown; + + // and transmit it + Udp.UdpSend(txdata); + + MessageBox.Show("Please wait abt. 
1 minute before powering OFF the modem", "Shut Down Modem", MessageBoxButtons.OK); + } + } + } +} diff --git a/oscardata/oscardata/Form1.resx b/oscardata/oscardata/Form1.resx new file mode 100755 index 0000000..a7c4dc1 --- /dev/null +++ b/oscardata/oscardata/Form1.resx @@ -0,0 +1,212 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + text/microsoft-resx + + + 2.0 + + + System.Resources.ResXResourceReader, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 + + + System.Resources.ResXResourceWriter, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 + + + 17, 17 + + + 138, 17 + + + 255, 17 + + + 371, 17 + + + 482, 17 + + + + + AAABAAEAICAAAAEAIACoEAAAFgAAACgAAAAgAAAAQAAAAAEAIAAAAAAAABAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAD/gIAC5ZhVV+eZVbLmmVXL5plVwOWYVZ/lmVWK5ZhVV/+AgAIAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAA6JtVIeaZVc/mmFWu55dVNgAAAAAAAAAAAAAAAAAAAADjl1Ub5ZhUT+iX + URYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAOeXWCDnmVXl5ppUW9+fYAjmmVZu5ZhVx+aZVcDmmVWl55pUiOea + VWD/gEAE6KJdC/+AQAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/gIAC55lV0eWYVlninVga5plVz+aaVJfjl1UbAAAAAAAA + AAAAAAAA1apVBueaVzXVqlUGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOaaVlPmmlWr359gCOaZVc7mmlZl359QEOaZ + Vo/mmVXe5plVwOaZVYTmmFY+AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF43 + NXheNzb/XTY2dgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA5ppVteiXVyznmVZr5plWmN+f + UBDmmVXU55lVqOiXURYAAAAAAAAAAAAAAAAAAAAAAAAAAICAgAR6enqjAAAAAAAAAAAAAAAAAAAAAAAA + AABdNjZ2Xjc2/143Nv9eNzb/XTY2dgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADlmFXHAAAAAOaa + VcninVga55pWkueZVagAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgIAEhISEvnNzc+cAAAAAAAAAAAAA + AAAAAAAAXTY2dl43Nv9eNzb/Xjc2/143Nv9eNzb/XTY2dgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOaa + VbUAAAAA5plVwwAAAADmmVXd6JdRFgAAAABdXV2kXV1dNAAAAAAAAAAAgICAAoqKisCQkJD/a2trvQAA + AAAAAAAAAAAAAF02NnZeNzb/Xjc2/143Nv9eNzb/Xjc2/143Nv9eNzb/XTY2dgAAAAAAAAAAAAAAAAAA + AAAAAAAA5ZhWlQAAAADlmFSpAAAAAOaZVcEAAAAAAAAAAFxcXDpeXl64XV1dfl9fX2aOjo7NoaGh/4qK + ivxoaGhRAAAAAAAAAAAAAAAAXjc2/3ZJSP94S0n/ekxL/3tNTP99T03/flBO/4BRUP9FMDD/Oy0ttQAA + AAAAAAAAAAAAAAAAAADnmVR/AAAAAOaaVY0AAAAA5plUhQAAAAAAAAAAAAAAAF9fX4EAAAAAkJCQ1rGx + sf+rq6v/cXFxxj4+PiFhYWHXcHBwywAAAABfODaJZTw7/21CQf9uQ0L/cEVD/3FGRf9zR0b/VTs6/ykp + Kf9CMC//XTY2dgAAAAAAAAAAAAAAAOWYVk3tklsO5ppVXdWqVQbmm1Q9AAAAAAAAAAAAAAAAX19fWZ+f + n9m/v7//wcHB/4GBgfFOTk5LT09P77CwsP9xcXH/b29vywAAAABfODaJYDg3/2M7Ov9kPDv/Zj08/0w1 + NP8pKSn/Qi8v/143Nv9eNzb/XTY2dgAAAAAAAAAA/4CAAueaVT//gEAE6JtXOAAAAAAAAAAAAAAAAAAA + AACoqKjCz8/P/9TU1P+VlZX/T09P/0JCQvrS0tL/rq6u/46Ojv9xcXH/b29vywAAAABfODaJXjc2/143 + Nv9FMTD/KSkp/0IvL/9eNzb/Xjc2/143Nv9eNzb/XTY2dgAAAAAAAAAA7ZJbDv+AQATfn2AIAAAAAAAA + AAAAAAAApaWlvtfX1//k5OT/oaGh8ZmZmf9qamr+2dnZ/7m5uf/AwMD/qamp/46Ojv9xcXH/b29vywAA + AABfODaJRTEw/ykpKf9CLy//Xjc2/143Nv9eNzb/Xjc2/143Nv9eNzb/XTY2dgAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAKamprTLy8v/zMzM9qysrJ1paWlEbGxs++Li4v/z8/P/3t7e/6urq//AwMD/qamp/46O + jv9xcXH/b29vyykpKXYpKSn/Qi8v/3ZJSP94S0n/ekxL/3tNTP99T03/flBO/4BRUP9eNzb/AAAAAAAA + AAAAAAAAAAAAAAAAAACsrKyfuLi427i4uJabm5spdXV1GICAgOzo6Oj///////n5+f/v7+//3t7e/6ur + q//AwMD/qqqq/46Ojv9xcXH/YGBg/ykpKYlfODaJZTw7/21CQf9uQ0L/cEVD/3FGRf9zR0b/akA//143 + 
N4cAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACOjo7L29vb/8nJyf/7+/v///////n5 + +f/v7+//3t7e/6urq//AwMD/qqqq/4+Pj/9ycnL/b29vygAAAABfODaJYDg3/2M7Ov9kPDv/Zj08/2M7 + Ov9fODaJAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJeXl5qsrKz/1NTU/8XF + xf/7+/v///////n5+f/v7+//3t7e/6ysrP/AwMD/qqqq/4+Pj/9ycnL/b29vygAAAABfODaJXjc2/143 + Nv9eNzb/Xzg2iQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIKC + go2srKz/1NTU/8XFxf/7+/v///////n5+f/v7+//3t7e/6ysrP/AwMD/qqqq/4+Pj/9ycnL/b29vygAA + AABfODaJXjc2/184NokAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAXjc1eF43 + Nv9dNjZ2AAAAAIKCgo2srKz/1NTU/8XFxf/7+/v///////n5+f/v7+//3t7e/6ysrP/AwMD/qqqq/4+P + j/92dnbzAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF02 + NnZeNzb/Xjc2/143Nv9dNjZ2AAAAAIKCgo2srKz/1NTU/8XFxf/7+/v///////n5+f/v7+//3t7e/6ys + rP/AwMD/oKCg/4iIiIUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AABdNjZ2Xjc2/143Nv9eNzb/Xjc2/143Nv9dNjZ2AAAAAIKCgo2srKz/1NTU/8XFxf/7+/v///////n5 + +f/v7+//3t7e/6enp/+Pj4/NAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAXTY2dl43Nv9eNzb/Xjc2/143Nv9eNzb/Xjc2/143Nv9dNjZ2KSkpdltbW/+srKz/1NTU/8XF + xf/7+/v///////n5+f/p6en/lJSU6ElJST9RUVFCa2trjwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAABeNzb/dklI/3hLSf96TEv/e01M/31PTf9+UE7/gFFQ/0UxMP8pKSn/KSkpiYKC + go2srKz/1NTU/8XFxf/7+/v/9PT0/52dnelycnL1Nzc357CwsPCJiYn8YmJingAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84NollPDv/bUJB/25DQv9wRUP/cUZF/3NHRv9VOzr/KSkp/0Iv + L/9dNjZ2AAAAAIKCgo2srKz/1NTU/7y8vP+ioqLLc3NzWaCgoPS4uLjz1dXV/6urq/97e3v+Y2NjoAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84NolgODf/Yzs6/2Q8O/9mPTz/TDU0/ykp + Kf9CLy//Xjc2/143Nv9dNjZ2AAAAAIGBgY6ZmZnmnZ2dgAAAAACCgoI9xsbG9Pz8/P/r6+v/uLi4/6qq + qv96enr+Y2NjoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84NoleNzb/Xjc2/0Ux + MP8pKSn/Qi8v/143Nv9eNzb/Xjc2/143Nv9dNjZ2AAAAAAAAAAAAAAAAAAAAAIyMjHzT09P/29vb//r6 + +v/r6+v/t7e3/6ioqP94eHj+YmJioQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84 + NolFMTD/KSkp/0IvL/9eNzb/Xjc2/143Nv9eNzb/Xjc2/143Nv9dNjZ2AAAAAAAAAAAAAAAAAAAAAIKC + gonOzs7/29vb//r6+v/r6+v/tra2/6enp/92dnb+Y2NjoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAADsuLchCLy//dklI/3hLSf96TEv/e01M/31PTf9+UE7/gFFQ/143Nv8AAAAAAAAAAAAA + AAAAAAAAAAAAAIKCgoPOzs7/29vb//r6+v/r6+v/tra2/6enp/91dXX+aGhomwAAAAAAAAAAAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84NollPDv/bUJB/25DQv9wRUP/cUZF/3NHRv9qQD//Xjc3hwAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAISEhHbOzs7/29vb//r6+v/r6+v/tra2/6ampv94eHjoAAAAAAAA + AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84NolgODf/Yzs6/2Q8O/9mPTz/Yzs6/184 + NokAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAISEhHbOzs7/29vb//r6+v/r6+v/np6e/3Z2 + dn0AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84NoleNzb/Xjc2/143 + Nv9fODaJAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAISEhHTOzs7/29vb/8HB + wf+Dg4P+WVlZLgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAF84 + NoleNzb/Xzg2iQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAISE + hHS0tLTZh4eHhqysrDEAAAAA4A///8PH//+AA///Acf//wAf+P8A+fB/Q/HgP1JhwB9WAcAPV0BABwcA + IAMPABABjgAIAPwAAAD4AAAA/4ABAf+AAIP/wABH/iAAf/wQAH/4CAD/8AAAP/AAAB/wAQAP+ACIB/wA + eAP+ADwB/wA+AP+APwD/wH+A/+D/wP/x/+E= + + + \ No newline at end of file diff --git a/oscardata/oscardata/Program.cs b/oscardata/oscardata/Program.cs new file mode 100755 index 0000000..5cb6ed1 --- /dev/null +++ b/oscardata/oscardata/Program.cs @@ -0,0 +1,19 @@ +using System; +using System.Windows.Forms; + +namespace oscardata +{ + static class Program + { + /// + /// Der Haupteinstiegspunkt für die 
Anwendung.
+        ///
+        [STAThread]
+        static void Main()
+        {
+            Application.EnableVisualStyles();
+            Application.SetCompatibleTextRenderingDefault(false);
+            Application.Run(new Form1());
+        }
+    }
+}
diff --git a/oscardata/oscardata/Properties/AssemblyInfo.cs b/oscardata/oscardata/Properties/AssemblyInfo.cs
new file mode 100755
index 0000000..0792239
--- /dev/null
+++ b/oscardata/oscardata/Properties/AssemblyInfo.cs
@@ -0,0 +1,36 @@
+using System.Reflection;
+using System.Runtime.CompilerServices;
+using System.Runtime.InteropServices;
+
+// General information about an assembly is controlled through the following
+// set of attributes. Change these attribute values to modify the information
+// associated with an assembly.
+[assembly: AssemblyTitle("oscardata")]
+[assembly: AssemblyDescription("")]
+[assembly: AssemblyConfiguration("")]
+[assembly: AssemblyCompany("")]
+[assembly: AssemblyProduct("oscardata")]
+[assembly: AssemblyCopyright("Copyright © 2020")]
+[assembly: AssemblyTrademark("")]
+[assembly: AssemblyCulture("")]
+
+// Setting ComVisible to false makes the types in this assembly invisible
+// to COM components. If you need to access a type in this assembly from
+// COM, set the ComVisible attribute to true on that type.
+[assembly: ComVisible(false)]
+
+// The following GUID is the ID of the typelib if this project is exposed to COM
+[assembly: Guid("989bf5c6-36f6-4158-9fb2-42e86d2020db")]
+
+// Version information for an assembly consists of the following four values:
+//
+//      Major Version
+//      Minor Version
+//      Build Number
+//      Revision
+//
+// You can specify all the values or you can default the Build and Revision Numbers
+// by using the '*' as shown below:
+// [assembly: AssemblyVersion("1.0.*")]
+[assembly: AssemblyVersion("1.0.0.0")]
+[assembly: AssemblyFileVersion("1.0.0.0")]
diff --git a/oscardata/oscardata/Properties/Resources.Designer.cs b/oscardata/oscardata/Properties/Resources.Designer.cs
new file mode 100755
index 0000000..0dc25c9
--- /dev/null
+++ b/oscardata/oscardata/Properties/Resources.Designer.cs
@@ -0,0 +1,93 @@
+//------------------------------------------------------------------------------
+// <auto-generated>
+//     This code was generated by a tool.
+//     Runtime Version:4.0.30319.42000
+//
+//     Changes to this file may cause incorrect behavior and will be lost if
+//     the code is regenerated.
+// </auto-generated>
+//------------------------------------------------------------------------------
+
+namespace oscardata.Properties {
+    using System;
+
+
+    /// <summary>
+    ///   A strongly-typed resource class, for looking up localized strings, etc.
+    /// </summary>
+    // This class was auto-generated by the StronglyTypedResourceBuilder
+    // class via a tool like ResGen or Visual Studio.
+    // To add or remove a member, edit your .ResX file then rerun ResGen
+    // with the /str option, or rebuild your VS project.
+    [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "16.0.0.0")]
+    [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
+    [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()]
+    internal class Resources {
+
+        private static global::System.Resources.ResourceManager resourceMan;
+
+        private static global::System.Globalization.CultureInfo resourceCulture;
+
+        [global::System.Diagnostics.CodeAnalysis.SuppressMessageAttribute("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode")]
+        internal Resources() {
+        }
+
+        /// <summary>
+        ///   Returns the cached ResourceManager instance used by this class.
+        /// </summary>
+        [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)]
+        internal static global::System.Resources.ResourceManager ResourceManager {
+            get {
+                if (object.ReferenceEquals(resourceMan, null)) {
+                    global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("oscardata.Properties.Resources", typeof(Resources).Assembly);
+                    resourceMan = temp;
+                }
+                return resourceMan;
+            }
+        }
+
+        /// <summary>
+        ///   Overrides the current thread's CurrentUICulture property for all
+        ///   resource lookups using this strongly typed resource class.
+        /// </summary>
+        [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)]
+        internal static global::System.Globalization.CultureInfo Culture {
+            get {
+                return resourceCulture;
+            }
+            set {
+                resourceCulture = value;
+            }
+        }
+
+        /// <summary>
+        ///   Looks up a localized resource of type System.Drawing.Bitmap.
+        /// </summary>
+        internal static System.Drawing.Bitmap constelBG {
+            get {
+                object obj = ResourceManager.GetObject("constelBG", resourceCulture);
+                return ((System.Drawing.Bitmap)(obj));
+            }
+        }
+
+        /// <summary>
+        ///   Looks up a localized resource of type System.Drawing.Bitmap.
+        /// </summary>
+        internal static System.Drawing.Bitmap defaultpic {
+            get {
+                object obj = ResourceManager.GetObject("defaultpic", resourceCulture);
+                return ((System.Drawing.Bitmap)(obj));
+            }
+        }
+
+        ///
+        /// Looks up a localized resource of type System.Drawing.Bitmap.
+ /// + internal static System.Drawing.Bitmap Satellite_icon { + get { + object obj = ResourceManager.GetObject("Satellite_icon", resourceCulture); + return ((System.Drawing.Bitmap)(obj)); + } + } + } +} diff --git a/oscardata/oscardata/Properties/Resources.resx b/oscardata/oscardata/Properties/Resources.resx new file mode 100755 index 0000000..fb8a259 --- /dev/null +++ b/oscardata/oscardata/Properties/Resources.resx @@ -0,0 +1,130 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + text/microsoft-resx + + + 2.0 + + + System.Resources.ResXResourceReader, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 + + + System.Resources.ResXResourceWriter, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 + + + + defaultpic.png;System.Drawing.Bitmap, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a + + + Satellite-icon.png;System.Drawing.Bitmap, System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a + + + constelBG.png;System.Drawing.Bitmap, System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a + + \ No newline at end of file diff --git a/oscardata/oscardata/Properties/Satellite-icon.ico b/oscardata/oscardata/Properties/Satellite-icon.ico new file mode 100755 index 0000000..86160c1 Binary files /dev/null and b/oscardata/oscardata/Properties/Satellite-icon.ico differ diff --git a/oscardata/oscardata/Properties/Satellite-icon.png b/oscardata/oscardata/Properties/Satellite-icon.png new file mode 100755 index 0000000..77d6d9d Binary files /dev/null and b/oscardata/oscardata/Properties/Satellite-icon.png differ diff --git a/oscardata/oscardata/Properties/Settings.Designer.cs b/oscardata/oscardata/Properties/Settings.Designer.cs new file mode 100755 index 0000000..75b410f --- /dev/null +++ b/oscardata/oscardata/Properties/Settings.Designer.cs @@ -0,0 +1,26 @@ +//------------------------------------------------------------------------------ +// +// Dieser Code wurde von einem Tool generiert. +// Laufzeitversion:4.0.30319.42000 +// +// Änderungen an dieser Datei können falsches Verhalten verursachen und gehen verloren, wenn +// der Code erneut generiert wird. 
+// +//------------------------------------------------------------------------------ + +namespace oscardata.Properties { + + + [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] + [global::System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.VisualStudio.Editors.SettingsDesigner.SettingsSingleFileGenerator", "16.7.0.0")] + internal sealed partial class Settings : global::System.Configuration.ApplicationSettingsBase { + + private static Settings defaultInstance = ((Settings)(global::System.Configuration.ApplicationSettingsBase.Synchronized(new Settings()))); + + public static Settings Default { + get { + return defaultInstance; + } + } + } +} diff --git a/oscardata/oscardata/Properties/Settings.settings b/oscardata/oscardata/Properties/Settings.settings new file mode 100755 index 0000000..abf36c5 --- /dev/null +++ b/oscardata/oscardata/Properties/Settings.settings @@ -0,0 +1,7 @@ + + + + + + + diff --git a/oscardata/oscardata/Properties/constelBG.png b/oscardata/oscardata/Properties/constelBG.png new file mode 100644 index 0000000..f9ae886 Binary files /dev/null and b/oscardata/oscardata/Properties/constelBG.png differ diff --git a/oscardata/oscardata/Properties/defaultpic.png b/oscardata/oscardata/Properties/defaultpic.png new file mode 100755 index 0000000..e7ad04e Binary files /dev/null and b/oscardata/oscardata/Properties/defaultpic.png differ diff --git a/oscardata/oscardata/Satellite-icon.ico b/oscardata/oscardata/Satellite-icon.ico new file mode 100755 index 0000000..86160c1 Binary files /dev/null and b/oscardata/oscardata/Satellite-icon.ico differ diff --git a/oscardata/oscardata/bin/Debug/image.bin b/oscardata/oscardata/bin/Debug/image.bin new file mode 100755 index 0000000..fa451b8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/image.bin differ diff --git a/oscardata/oscardata/bin/Debug/oscardata.exe b/oscardata/oscardata/bin/Debug/oscardata.exe new file mode 100755 index 0000000..acfdc8a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/oscardata.exe differ diff --git a/oscardata/oscardata/bin/Debug/oscardata.exe.config b/oscardata/oscardata/bin/Debug/oscardata.exe.config new file mode 100755 index 0000000..e743be0 --- /dev/null +++ b/oscardata/oscardata/bin/Debug/oscardata.exe.config @@ -0,0 +1,6 @@ + + + + + + diff --git a/oscardata/oscardata/bin/Debug/oscardata.pdb b/oscardata/oscardata/bin/Debug/oscardata.pdb new file mode 100755 index 0000000..f151d47 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/oscardata.pdb differ diff --git a/oscardata/oscardata/bin/Debug/rxdata.jpg b/oscardata/oscardata/bin/Debug/rxdata.jpg new file mode 100755 index 0000000..3a7ca6e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/rxdata.jpg differ diff --git a/oscardata/oscardata/bin/Debug/rxtemp.zip b/oscardata/oscardata/bin/Debug/rxtemp.zip new file mode 100755 index 0000000..a2da016 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/rxtemp.zip differ diff --git a/oscardata/oscardata/bin/Debug/temp.jpg b/oscardata/oscardata/bin/Debug/temp.jpg new file mode 100755 index 0000000..d9f11c9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp.zip b/oscardata/oscardata/bin/Debug/temp.zip new file mode 100755 index 0000000..23c93ae Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp.zip differ diff --git a/oscardata/oscardata/bin/Debug/temp10.jpg b/oscardata/oscardata/bin/Debug/temp10.jpg new file mode 100755 index 0000000..a073b07 Binary files 
/dev/null and b/oscardata/oscardata/bin/Debug/temp10.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp11.jpg b/oscardata/oscardata/bin/Debug/temp11.jpg new file mode 100755 index 0000000..e6cab15 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp11.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp12.jpg b/oscardata/oscardata/bin/Debug/temp12.jpg new file mode 100755 index 0000000..a05f6cb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp12.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp127.jpg b/oscardata/oscardata/bin/Debug/temp127.jpg new file mode 100755 index 0000000..5e7fc85 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp127.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp128.jpg b/oscardata/oscardata/bin/Debug/temp128.jpg new file mode 100755 index 0000000..d1b37dc Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp128.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp129.jpg b/oscardata/oscardata/bin/Debug/temp129.jpg new file mode 100755 index 0000000..cc5e4e1 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp129.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp13.jpg b/oscardata/oscardata/bin/Debug/temp13.jpg new file mode 100755 index 0000000..5cf09f3 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp13.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp130.jpg b/oscardata/oscardata/bin/Debug/temp130.jpg new file mode 100755 index 0000000..b788052 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp130.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp131.jpg b/oscardata/oscardata/bin/Debug/temp131.jpg new file mode 100755 index 0000000..aebdf73 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp131.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp132.jpg b/oscardata/oscardata/bin/Debug/temp132.jpg new file mode 100755 index 0000000..e0545f0 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp132.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp133.jpg b/oscardata/oscardata/bin/Debug/temp133.jpg new file mode 100755 index 0000000..2e73c10 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp133.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp134.jpg b/oscardata/oscardata/bin/Debug/temp134.jpg new file mode 100755 index 0000000..b182a77 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp134.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp135.jpg b/oscardata/oscardata/bin/Debug/temp135.jpg new file mode 100755 index 0000000..00f0241 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp135.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp136.jpg b/oscardata/oscardata/bin/Debug/temp136.jpg new file mode 100755 index 0000000..2861d4b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp136.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp137.jpg b/oscardata/oscardata/bin/Debug/temp137.jpg new file mode 100755 index 0000000..da02da4 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp137.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp138.jpg b/oscardata/oscardata/bin/Debug/temp138.jpg new file mode 100755 index 0000000..f9bbf84 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp138.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp139.jpg b/oscardata/oscardata/bin/Debug/temp139.jpg new file mode 100755 index 0000000..d646935 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp139.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp14.jpg b/oscardata/oscardata/bin/Debug/temp14.jpg new file mode 100755 index 0000000..6521598 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp14.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp140.jpg b/oscardata/oscardata/bin/Debug/temp140.jpg new file mode 100755 index 0000000..87515ce Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp140.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp141.jpg b/oscardata/oscardata/bin/Debug/temp141.jpg new file mode 100755 index 0000000..e1ac8f9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp141.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp142.jpg b/oscardata/oscardata/bin/Debug/temp142.jpg new file mode 100755 index 0000000..90b4b8d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp142.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp143.jpg b/oscardata/oscardata/bin/Debug/temp143.jpg new file mode 100755 index 0000000..fe15329 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp143.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp144.jpg b/oscardata/oscardata/bin/Debug/temp144.jpg new file mode 100755 index 0000000..7b6c300 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp144.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp145.jpg b/oscardata/oscardata/bin/Debug/temp145.jpg new file mode 100755 index 0000000..438d586 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp145.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp146.jpg b/oscardata/oscardata/bin/Debug/temp146.jpg new file mode 100755 index 0000000..bc2f8a7 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp146.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp147.jpg b/oscardata/oscardata/bin/Debug/temp147.jpg new file mode 100755 index 0000000..0d9ab91 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp147.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp148.jpg b/oscardata/oscardata/bin/Debug/temp148.jpg new file mode 100755 index 0000000..6933c7f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp148.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp149.jpg b/oscardata/oscardata/bin/Debug/temp149.jpg new file mode 100755 index 0000000..1327748 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp149.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp15.jpg b/oscardata/oscardata/bin/Debug/temp15.jpg new file mode 100755 index 0000000..38043c5 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp15.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp150.jpg b/oscardata/oscardata/bin/Debug/temp150.jpg new file mode 100755 index 0000000..2ebedbd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp150.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp151.jpg b/oscardata/oscardata/bin/Debug/temp151.jpg new file mode 100755 index 0000000..f2448ff Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp151.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp152.jpg b/oscardata/oscardata/bin/Debug/temp152.jpg new file mode 100755 index 0000000..a5ede9e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp152.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp153.jpg b/oscardata/oscardata/bin/Debug/temp153.jpg new file mode 100755 index 0000000..eb750f8 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp153.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp154.jpg b/oscardata/oscardata/bin/Debug/temp154.jpg new file mode 100755 index 0000000..cb11fc6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp154.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp155.jpg b/oscardata/oscardata/bin/Debug/temp155.jpg new file mode 100755 index 0000000..b3c7d27 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp155.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp156.jpg b/oscardata/oscardata/bin/Debug/temp156.jpg new file mode 100755 index 0000000..2ee7da1 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp156.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp157.jpg b/oscardata/oscardata/bin/Debug/temp157.jpg new file mode 100755 index 0000000..253129d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp157.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp158.jpg b/oscardata/oscardata/bin/Debug/temp158.jpg new file mode 100755 index 0000000..4d61fc9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp158.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp159.jpg b/oscardata/oscardata/bin/Debug/temp159.jpg new file mode 100755 index 0000000..ee0e411 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp159.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp16.jpg b/oscardata/oscardata/bin/Debug/temp16.jpg new file mode 100755 index 0000000..5ecea04 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp16.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp160.jpg b/oscardata/oscardata/bin/Debug/temp160.jpg new file mode 100755 index 0000000..f5a3d65 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp160.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp161.jpg b/oscardata/oscardata/bin/Debug/temp161.jpg new file mode 100755 index 0000000..7c6f6f1 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp161.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp162.jpg b/oscardata/oscardata/bin/Debug/temp162.jpg new file mode 100755 index 0000000..b2b8889 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp162.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp163.jpg b/oscardata/oscardata/bin/Debug/temp163.jpg new file mode 100755 index 0000000..c620ed5 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp163.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp164.jpg b/oscardata/oscardata/bin/Debug/temp164.jpg new file mode 100755 index 0000000..74751bf Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp164.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp165.jpg b/oscardata/oscardata/bin/Debug/temp165.jpg new file mode 100755 index 0000000..1c1e711 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp165.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp166.jpg b/oscardata/oscardata/bin/Debug/temp166.jpg new file mode 100755 index 0000000..5cd66ce Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp166.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp167.jpg b/oscardata/oscardata/bin/Debug/temp167.jpg new file mode 100755 index 0000000..33a9ad2 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp167.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp168.jpg b/oscardata/oscardata/bin/Debug/temp168.jpg new file mode 100755 index 0000000..f4a9a29 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp168.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp169.jpg b/oscardata/oscardata/bin/Debug/temp169.jpg new file mode 100755 index 0000000..00a0f6a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp169.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp17.jpg b/oscardata/oscardata/bin/Debug/temp17.jpg new file mode 100755 index 0000000..64adb91 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp17.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp170.jpg b/oscardata/oscardata/bin/Debug/temp170.jpg new file mode 100755 index 0000000..cd8e5c9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp170.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp171.jpg b/oscardata/oscardata/bin/Debug/temp171.jpg new file mode 100755 index 0000000..d02bbfb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp171.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp172.jpg b/oscardata/oscardata/bin/Debug/temp172.jpg new file mode 100755 index 0000000..e0aae34 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp172.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp173.jpg b/oscardata/oscardata/bin/Debug/temp173.jpg new file mode 100755 index 0000000..9ee3a08 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp173.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp174.jpg b/oscardata/oscardata/bin/Debug/temp174.jpg new file mode 100755 index 0000000..c1b9691 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp174.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp175.jpg b/oscardata/oscardata/bin/Debug/temp175.jpg new file mode 100755 index 0000000..46fac98 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp175.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp176.jpg b/oscardata/oscardata/bin/Debug/temp176.jpg new file mode 100755 index 0000000..fbf66e7 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp176.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp177.jpg b/oscardata/oscardata/bin/Debug/temp177.jpg new file mode 100755 index 0000000..179e4c9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp177.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp178.jpg b/oscardata/oscardata/bin/Debug/temp178.jpg new file mode 100755 index 0000000..b3e031a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp178.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp179.jpg b/oscardata/oscardata/bin/Debug/temp179.jpg new file mode 100755 index 0000000..cfb12bf Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp179.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp18.jpg b/oscardata/oscardata/bin/Debug/temp18.jpg new file mode 100755 index 0000000..6581f24 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp18.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp180.jpg b/oscardata/oscardata/bin/Debug/temp180.jpg new file mode 100755 index 0000000..aab35fa Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp180.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp181.jpg b/oscardata/oscardata/bin/Debug/temp181.jpg new file mode 100755 index 0000000..e196b60 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp181.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp182.jpg b/oscardata/oscardata/bin/Debug/temp182.jpg new file mode 100755 index 0000000..1f8c412 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp182.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp183.jpg b/oscardata/oscardata/bin/Debug/temp183.jpg new file mode 100755 index 0000000..fddc826 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp183.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp184.jpg b/oscardata/oscardata/bin/Debug/temp184.jpg new file mode 100755 index 0000000..97a3dcd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp184.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp185.jpg b/oscardata/oscardata/bin/Debug/temp185.jpg new file mode 100755 index 0000000..b07af37 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp185.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp186.jpg b/oscardata/oscardata/bin/Debug/temp186.jpg new file mode 100755 index 0000000..a36028f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp186.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp187.jpg b/oscardata/oscardata/bin/Debug/temp187.jpg new file mode 100755 index 0000000..cd13c2d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp187.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp188.jpg b/oscardata/oscardata/bin/Debug/temp188.jpg new file mode 100755 index 0000000..7f8c6dd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp188.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp189.jpg b/oscardata/oscardata/bin/Debug/temp189.jpg new file mode 100755 index 0000000..5b37c5e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp189.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp19.jpg b/oscardata/oscardata/bin/Debug/temp19.jpg new file mode 100755 index 0000000..065fa08 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp19.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp190.jpg b/oscardata/oscardata/bin/Debug/temp190.jpg new file mode 100755 index 0000000..25fcecc Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp190.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp191.jpg b/oscardata/oscardata/bin/Debug/temp191.jpg new file mode 100755 index 0000000..51589ac Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp191.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp192.jpg b/oscardata/oscardata/bin/Debug/temp192.jpg new file mode 100755 index 0000000..a9766c1 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp192.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp193.jpg b/oscardata/oscardata/bin/Debug/temp193.jpg new file mode 100755 index 0000000..82e40cb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp193.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp194.jpg b/oscardata/oscardata/bin/Debug/temp194.jpg new file mode 100755 index 0000000..1561de0 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp194.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp195.jpg b/oscardata/oscardata/bin/Debug/temp195.jpg new file mode 100755 index 0000000..51168c7 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp195.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp196.jpg b/oscardata/oscardata/bin/Debug/temp196.jpg new file mode 100755 index 0000000..69e691a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp196.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp197.jpg b/oscardata/oscardata/bin/Debug/temp197.jpg new file mode 100755 index 0000000..f57f6e1 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp197.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp198.jpg b/oscardata/oscardata/bin/Debug/temp198.jpg new file mode 100755 index 0000000..9879386 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp198.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp199.jpg b/oscardata/oscardata/bin/Debug/temp199.jpg new file mode 100755 index 0000000..caad10e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp199.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp20.jpg b/oscardata/oscardata/bin/Debug/temp20.jpg new file mode 100755 index 0000000..d832ee7 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp20.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp200.jpg b/oscardata/oscardata/bin/Debug/temp200.jpg new file mode 100755 index 0000000..d9e12af Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp200.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp201.jpg b/oscardata/oscardata/bin/Debug/temp201.jpg new file mode 100755 index 0000000..dba5f08 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp201.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp202.jpg b/oscardata/oscardata/bin/Debug/temp202.jpg new file mode 100755 index 0000000..eb916a9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp202.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp203.jpg b/oscardata/oscardata/bin/Debug/temp203.jpg new file mode 100755 index 0000000..5d1a1ed Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp203.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp204.jpg b/oscardata/oscardata/bin/Debug/temp204.jpg new file mode 100755 index 0000000..afc4e1c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp204.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp205.jpg b/oscardata/oscardata/bin/Debug/temp205.jpg new file mode 100755 index 0000000..b1d4e9a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp205.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp206.jpg b/oscardata/oscardata/bin/Debug/temp206.jpg new file mode 100755 index 0000000..17ba48b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp206.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp207.jpg b/oscardata/oscardata/bin/Debug/temp207.jpg new file mode 100755 index 0000000..ce9c94f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp207.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp208.jpg b/oscardata/oscardata/bin/Debug/temp208.jpg new file mode 100755 index 0000000..4f38eef Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp208.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp209.jpg b/oscardata/oscardata/bin/Debug/temp209.jpg new file mode 100755 index 0000000..9d86bf5 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp209.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp210.jpg b/oscardata/oscardata/bin/Debug/temp210.jpg new file mode 100755 index 0000000..e79bc3b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp210.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp211.jpg b/oscardata/oscardata/bin/Debug/temp211.jpg new file mode 100755 index 0000000..ba91e72 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp211.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp212.jpg b/oscardata/oscardata/bin/Debug/temp212.jpg new file mode 100755 index 0000000..5073053 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp212.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp213.jpg b/oscardata/oscardata/bin/Debug/temp213.jpg new file mode 100755 index 0000000..5f4c300 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp213.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp214.jpg b/oscardata/oscardata/bin/Debug/temp214.jpg new file mode 100755 index 0000000..eee4106 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp214.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp215.jpg b/oscardata/oscardata/bin/Debug/temp215.jpg new file mode 100755 index 0000000..880a3e9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp215.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp216.jpg b/oscardata/oscardata/bin/Debug/temp216.jpg new file mode 100755 index 0000000..041f883 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp216.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp217.jpg b/oscardata/oscardata/bin/Debug/temp217.jpg new file mode 100755 index 0000000..d8e3a4d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp217.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp218.jpg b/oscardata/oscardata/bin/Debug/temp218.jpg new file mode 100755 index 0000000..a4bb318 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp218.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp219.jpg b/oscardata/oscardata/bin/Debug/temp219.jpg new file mode 100755 index 0000000..ff5ac48 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp219.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp220.jpg b/oscardata/oscardata/bin/Debug/temp220.jpg new file mode 100755 index 0000000..7210b56 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp220.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp221.jpg b/oscardata/oscardata/bin/Debug/temp221.jpg new file mode 100755 index 0000000..f3462a0 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp221.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp222.jpg b/oscardata/oscardata/bin/Debug/temp222.jpg new file mode 100755 index 0000000..943bdc8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp222.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp223.jpg b/oscardata/oscardata/bin/Debug/temp223.jpg new file mode 100755 index 0000000..0920add Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp223.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp224.jpg b/oscardata/oscardata/bin/Debug/temp224.jpg new file mode 100755 index 0000000..ae26389 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp224.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp225.jpg b/oscardata/oscardata/bin/Debug/temp225.jpg new file mode 100755 index 0000000..7a7e9ab Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp225.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp226.jpg b/oscardata/oscardata/bin/Debug/temp226.jpg new file mode 100755 index 0000000..cd834ec Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp226.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp227.jpg b/oscardata/oscardata/bin/Debug/temp227.jpg new file mode 100755 index 0000000..ff6e1b4 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp227.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp228.jpg b/oscardata/oscardata/bin/Debug/temp228.jpg new file mode 100755 index 0000000..01d2623 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp228.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp229.jpg b/oscardata/oscardata/bin/Debug/temp229.jpg new file mode 100755 index 0000000..29ddfe8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp229.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp230.jpg b/oscardata/oscardata/bin/Debug/temp230.jpg new file mode 100755 index 0000000..d3105d6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp230.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp231.jpg b/oscardata/oscardata/bin/Debug/temp231.jpg new file mode 100755 index 0000000..d950e57 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp231.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp232.jpg b/oscardata/oscardata/bin/Debug/temp232.jpg new file mode 100755 index 0000000..01daa95 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp232.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp233.jpg b/oscardata/oscardata/bin/Debug/temp233.jpg new file mode 100755 index 0000000..fd016d6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp233.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp234.jpg b/oscardata/oscardata/bin/Debug/temp234.jpg new file mode 100755 index 0000000..ca567bf Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp234.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp235.jpg b/oscardata/oscardata/bin/Debug/temp235.jpg new file mode 100755 index 0000000..7eea044 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp235.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp236.jpg b/oscardata/oscardata/bin/Debug/temp236.jpg new file mode 100755 index 0000000..941b9cd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp236.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp237.jpg b/oscardata/oscardata/bin/Debug/temp237.jpg new file mode 100755 index 0000000..e3e7a09 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp237.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp238.jpg b/oscardata/oscardata/bin/Debug/temp238.jpg new file mode 100755 index 0000000..80c79ec Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp238.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp239.jpg b/oscardata/oscardata/bin/Debug/temp239.jpg new file mode 100755 index 0000000..b342633 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp239.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp240.jpg b/oscardata/oscardata/bin/Debug/temp240.jpg new file mode 100755 index 0000000..a25fdfb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp240.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp241.jpg b/oscardata/oscardata/bin/Debug/temp241.jpg new file mode 100755 index 0000000..a479d8a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp241.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp242.jpg b/oscardata/oscardata/bin/Debug/temp242.jpg new file mode 100755 index 0000000..1e0a434 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp242.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp243.jpg b/oscardata/oscardata/bin/Debug/temp243.jpg new file mode 100755 index 0000000..4a0b8df Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp243.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp244.jpg b/oscardata/oscardata/bin/Debug/temp244.jpg new file mode 100755 index 0000000..7c3abca Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp244.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp245.jpg b/oscardata/oscardata/bin/Debug/temp245.jpg new file mode 100755 index 0000000..8b1467f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp245.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp246.jpg b/oscardata/oscardata/bin/Debug/temp246.jpg new file mode 100755 index 0000000..fc02efb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp246.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp247.jpg b/oscardata/oscardata/bin/Debug/temp247.jpg new file mode 100755 index 0000000..a118dde Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp247.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp248.jpg b/oscardata/oscardata/bin/Debug/temp248.jpg new file mode 100755 index 0000000..6a73072 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp248.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp249.jpg b/oscardata/oscardata/bin/Debug/temp249.jpg new file mode 100755 index 0000000..1659590 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp249.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp250.jpg b/oscardata/oscardata/bin/Debug/temp250.jpg new file mode 100755 index 0000000..f44f287 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp250.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp251.jpg b/oscardata/oscardata/bin/Debug/temp251.jpg new file mode 100755 index 0000000..ffc1680 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp251.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp252.jpg b/oscardata/oscardata/bin/Debug/temp252.jpg new file mode 100755 index 0000000..0b22509 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp252.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp253.jpg b/oscardata/oscardata/bin/Debug/temp253.jpg new file mode 100755 index 0000000..563d1ee Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp253.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp254.jpg b/oscardata/oscardata/bin/Debug/temp254.jpg new file mode 100755 index 0000000..ebc7253 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp254.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp255.jpg b/oscardata/oscardata/bin/Debug/temp255.jpg new file mode 100755 index 0000000..4318ee5 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp255.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp256.jpg b/oscardata/oscardata/bin/Debug/temp256.jpg new file mode 100755 index 0000000..aa44fd7 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp256.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp257.jpg b/oscardata/oscardata/bin/Debug/temp257.jpg new file mode 100755 index 0000000..8c634ab Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp257.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp258.jpg b/oscardata/oscardata/bin/Debug/temp258.jpg new file mode 100755 index 0000000..0960685 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp258.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp259.jpg b/oscardata/oscardata/bin/Debug/temp259.jpg new file mode 100755 index 0000000..cdaf423 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp259.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp260.jpg b/oscardata/oscardata/bin/Debug/temp260.jpg new file mode 100755 index 0000000..5db9b89 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp260.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp261.jpg b/oscardata/oscardata/bin/Debug/temp261.jpg new file mode 100755 index 0000000..130647f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp261.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp262.jpg b/oscardata/oscardata/bin/Debug/temp262.jpg new file mode 100755 index 0000000..f2c2cd0 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp262.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp263.jpg b/oscardata/oscardata/bin/Debug/temp263.jpg new file mode 100755 index 0000000..f3503de Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp263.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp264.jpg b/oscardata/oscardata/bin/Debug/temp264.jpg new file mode 100755 index 0000000..865247a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp264.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp265.jpg b/oscardata/oscardata/bin/Debug/temp265.jpg new file mode 100755 index 0000000..fd430ed Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp265.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp266.jpg b/oscardata/oscardata/bin/Debug/temp266.jpg new file mode 100755 index 0000000..a5ed5b1 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp266.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp267.jpg b/oscardata/oscardata/bin/Debug/temp267.jpg new file mode 100755 index 0000000..1cc17dd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp267.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp268.jpg b/oscardata/oscardata/bin/Debug/temp268.jpg new file mode 100755 index 0000000..ca0bc30 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp268.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp269.jpg b/oscardata/oscardata/bin/Debug/temp269.jpg new file mode 100755 index 0000000..3be46e3 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp269.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp27.jpg b/oscardata/oscardata/bin/Debug/temp27.jpg new file mode 100755 index 0000000..b6a014e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp27.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp270.jpg b/oscardata/oscardata/bin/Debug/temp270.jpg new file mode 100755 index 0000000..434fe2b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp270.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp271.jpg b/oscardata/oscardata/bin/Debug/temp271.jpg new file mode 100755 index 0000000..814413d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp271.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp272.jpg b/oscardata/oscardata/bin/Debug/temp272.jpg new file mode 100755 index 0000000..86bcd9a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp272.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp273.jpg b/oscardata/oscardata/bin/Debug/temp273.jpg new file mode 100755 index 0000000..4865e5c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp273.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp274.jpg b/oscardata/oscardata/bin/Debug/temp274.jpg new file mode 100755 index 0000000..43fec5f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp274.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp275.jpg b/oscardata/oscardata/bin/Debug/temp275.jpg new file mode 100755 index 0000000..f9ac983 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp275.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp276.jpg b/oscardata/oscardata/bin/Debug/temp276.jpg new file mode 100755 index 0000000..31b54e9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp276.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp277.jpg b/oscardata/oscardata/bin/Debug/temp277.jpg new file mode 100755 index 0000000..50feb6b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp277.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp278.jpg b/oscardata/oscardata/bin/Debug/temp278.jpg new file mode 100755 index 0000000..5ae9cf4 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp278.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp279.jpg b/oscardata/oscardata/bin/Debug/temp279.jpg new file mode 100755 index 0000000..e539c23 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp279.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp28.jpg b/oscardata/oscardata/bin/Debug/temp28.jpg new file mode 100755 index 0000000..829b10d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp28.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp280.jpg b/oscardata/oscardata/bin/Debug/temp280.jpg new file mode 100755 index 0000000..713c9e7 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp280.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp281.jpg b/oscardata/oscardata/bin/Debug/temp281.jpg new file mode 100755 index 0000000..02558ea Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp281.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp282.jpg b/oscardata/oscardata/bin/Debug/temp282.jpg new file mode 100755 index 0000000..a4738b6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp282.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp283.jpg b/oscardata/oscardata/bin/Debug/temp283.jpg new file mode 100755 index 0000000..823eb48 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp283.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp284.jpg b/oscardata/oscardata/bin/Debug/temp284.jpg new file mode 100755 index 0000000..e21d858 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp284.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp285.jpg b/oscardata/oscardata/bin/Debug/temp285.jpg new file mode 100755 index 0000000..77d4c6f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp285.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp286.jpg b/oscardata/oscardata/bin/Debug/temp286.jpg new file mode 100755 index 0000000..d8f0184 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp286.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp287.jpg b/oscardata/oscardata/bin/Debug/temp287.jpg new file mode 100755 index 0000000..c3120c5 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp287.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp288.jpg b/oscardata/oscardata/bin/Debug/temp288.jpg new file mode 100755 index 0000000..9ab76ae Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp288.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp289.jpg b/oscardata/oscardata/bin/Debug/temp289.jpg new file mode 100755 index 0000000..2a3fc46 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp289.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp29.jpg b/oscardata/oscardata/bin/Debug/temp29.jpg new file mode 100755 index 0000000..c0bb110 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp29.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp290.jpg b/oscardata/oscardata/bin/Debug/temp290.jpg new file mode 100755 index 0000000..3a84399 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp290.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp291.jpg b/oscardata/oscardata/bin/Debug/temp291.jpg new file mode 100755 index 0000000..cd28fe9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp291.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp292.jpg b/oscardata/oscardata/bin/Debug/temp292.jpg new file mode 100755 index 0000000..322219b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp292.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp293.jpg b/oscardata/oscardata/bin/Debug/temp293.jpg new file mode 100755 index 0000000..48213af Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp293.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp294.jpg b/oscardata/oscardata/bin/Debug/temp294.jpg new file mode 100755 index 0000000..8a2954b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp294.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp295.jpg b/oscardata/oscardata/bin/Debug/temp295.jpg new file mode 100755 index 0000000..dd9208f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp295.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp296.jpg b/oscardata/oscardata/bin/Debug/temp296.jpg new file mode 100755 index 0000000..22005fa Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp296.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp297.jpg b/oscardata/oscardata/bin/Debug/temp297.jpg new file mode 100755 index 0000000..4d670b0 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp297.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp298.jpg b/oscardata/oscardata/bin/Debug/temp298.jpg new file mode 100755 index 0000000..f05aee0 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp298.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp299.jpg b/oscardata/oscardata/bin/Debug/temp299.jpg new file mode 100755 index 0000000..c9782df Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp299.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp3.jpg b/oscardata/oscardata/bin/Debug/temp3.jpg new file mode 100755 index 0000000..a94df53 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp3.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp30.jpg b/oscardata/oscardata/bin/Debug/temp30.jpg new file mode 100755 index 0000000..faad667 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp30.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp300.jpg b/oscardata/oscardata/bin/Debug/temp300.jpg new file mode 100755 index 0000000..a6dbe59 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp300.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp301.jpg b/oscardata/oscardata/bin/Debug/temp301.jpg new file mode 100755 index 0000000..ff4941d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp301.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp302.jpg b/oscardata/oscardata/bin/Debug/temp302.jpg new file mode 100755 index 0000000..b9ac4ad Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp302.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp303.jpg b/oscardata/oscardata/bin/Debug/temp303.jpg new file mode 100755 index 0000000..9eb51f5 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp303.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp304.jpg b/oscardata/oscardata/bin/Debug/temp304.jpg new file mode 100755 index 0000000..f1536cb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp304.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp305.jpg b/oscardata/oscardata/bin/Debug/temp305.jpg new file mode 100755 index 0000000..91b692e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp305.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp306.jpg b/oscardata/oscardata/bin/Debug/temp306.jpg new file mode 100755 index 0000000..7b4ea97 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp306.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp307.jpg b/oscardata/oscardata/bin/Debug/temp307.jpg new file mode 100755 index 0000000..a1d5720 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp307.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp308.jpg b/oscardata/oscardata/bin/Debug/temp308.jpg new file mode 100755 index 0000000..0e1198c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp308.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp309.jpg b/oscardata/oscardata/bin/Debug/temp309.jpg new file mode 100755 index 0000000..f3642fd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp309.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp31.jpg b/oscardata/oscardata/bin/Debug/temp31.jpg new file mode 100755 index 0000000..ad461bf Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp31.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp310.jpg b/oscardata/oscardata/bin/Debug/temp310.jpg new file mode 100755 index 0000000..2d99317 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp310.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp311.jpg b/oscardata/oscardata/bin/Debug/temp311.jpg new file mode 100755 index 0000000..33ef407 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp311.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp312.jpg b/oscardata/oscardata/bin/Debug/temp312.jpg new file mode 100755 index 0000000..83e9e5c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp312.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp313.jpg b/oscardata/oscardata/bin/Debug/temp313.jpg new file mode 100755 index 0000000..713d66a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp313.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp314.jpg b/oscardata/oscardata/bin/Debug/temp314.jpg new file mode 100755 index 0000000..31a31a6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp314.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp315.jpg b/oscardata/oscardata/bin/Debug/temp315.jpg new file mode 100755 index 0000000..f641317 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp315.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp316.jpg b/oscardata/oscardata/bin/Debug/temp316.jpg new file mode 100755 index 0000000..f980de2 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp316.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp317.jpg b/oscardata/oscardata/bin/Debug/temp317.jpg new file mode 100755 index 0000000..b133360 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp317.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp318.jpg b/oscardata/oscardata/bin/Debug/temp318.jpg new file mode 100755 index 0000000..7ca2c09 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp318.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp319.jpg b/oscardata/oscardata/bin/Debug/temp319.jpg new file mode 100755 index 0000000..76c75e5 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp319.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp32.jpg b/oscardata/oscardata/bin/Debug/temp32.jpg new file mode 100755 index 0000000..36bd4a2 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp32.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp320.jpg b/oscardata/oscardata/bin/Debug/temp320.jpg new file mode 100755 index 0000000..f239b8c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp320.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp321.jpg b/oscardata/oscardata/bin/Debug/temp321.jpg new file mode 100755 index 0000000..d32093f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp321.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp322.jpg b/oscardata/oscardata/bin/Debug/temp322.jpg new file mode 100755 index 0000000..1e255c6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp322.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp323.jpg b/oscardata/oscardata/bin/Debug/temp323.jpg new file mode 100755 index 0000000..8551a01 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp323.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp324.jpg b/oscardata/oscardata/bin/Debug/temp324.jpg new file mode 100755 index 0000000..3a558f6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp324.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp325.jpg b/oscardata/oscardata/bin/Debug/temp325.jpg new file mode 100755 index 0000000..4f6d124 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp325.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp326.jpg b/oscardata/oscardata/bin/Debug/temp326.jpg new file mode 100755 index 0000000..ac3bc09 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp326.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp327.jpg b/oscardata/oscardata/bin/Debug/temp327.jpg new file mode 100755 index 0000000..b0e039d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp327.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp328.jpg b/oscardata/oscardata/bin/Debug/temp328.jpg new file mode 100755 index 0000000..a62eb86 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp328.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp329.jpg b/oscardata/oscardata/bin/Debug/temp329.jpg new file mode 100755 index 0000000..73578f6 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp329.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp33.jpg b/oscardata/oscardata/bin/Debug/temp33.jpg new file mode 100755 index 0000000..158a29b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp33.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp34.jpg b/oscardata/oscardata/bin/Debug/temp34.jpg new file mode 100755 index 0000000..c8b2c29 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp34.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp35.jpg b/oscardata/oscardata/bin/Debug/temp35.jpg new file mode 100755 index 0000000..3e15f14 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp35.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp36.jpg b/oscardata/oscardata/bin/Debug/temp36.jpg new file mode 100755 index 0000000..95a9286 Binary files /dev/null and 
b/oscardata/oscardata/bin/Debug/temp36.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp37.jpg b/oscardata/oscardata/bin/Debug/temp37.jpg new file mode 100755 index 0000000..e29cf4e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp37.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp38.jpg b/oscardata/oscardata/bin/Debug/temp38.jpg new file mode 100755 index 0000000..c346f03 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp38.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp39.jpg b/oscardata/oscardata/bin/Debug/temp39.jpg new file mode 100755 index 0000000..01aef87 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp39.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp4.jpg b/oscardata/oscardata/bin/Debug/temp4.jpg new file mode 100755 index 0000000..4c8c7af Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp4.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp43.jpg b/oscardata/oscardata/bin/Debug/temp43.jpg new file mode 100755 index 0000000..44a7990 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp43.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp48.jpg b/oscardata/oscardata/bin/Debug/temp48.jpg new file mode 100755 index 0000000..163da3c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp48.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp49.jpg b/oscardata/oscardata/bin/Debug/temp49.jpg new file mode 100755 index 0000000..f268984 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp49.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp5.jpg b/oscardata/oscardata/bin/Debug/temp5.jpg new file mode 100755 index 0000000..3db79af Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp5.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp50.jpg b/oscardata/oscardata/bin/Debug/temp50.jpg new file mode 100755 index 0000000..0e446de Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp50.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp51.jpg b/oscardata/oscardata/bin/Debug/temp51.jpg new file mode 100755 index 0000000..8b9a25e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp51.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp52.jpg b/oscardata/oscardata/bin/Debug/temp52.jpg new file mode 100755 index 0000000..58f5087 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp52.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp53.jpg b/oscardata/oscardata/bin/Debug/temp53.jpg new file mode 100755 index 0000000..fa80178 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp53.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp54.jpg b/oscardata/oscardata/bin/Debug/temp54.jpg new file mode 100755 index 0000000..092499a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp54.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp56.jpg b/oscardata/oscardata/bin/Debug/temp56.jpg new file mode 100755 index 0000000..81ea57b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp56.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp6.jpg b/oscardata/oscardata/bin/Debug/temp6.jpg new file mode 100755 index 0000000..b50f383 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp6.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp62.jpg b/oscardata/oscardata/bin/Debug/temp62.jpg new file mode 100755 index 0000000..53dc8fd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp62.jpg differ diff --git 
a/oscardata/oscardata/bin/Debug/temp63.jpg b/oscardata/oscardata/bin/Debug/temp63.jpg new file mode 100755 index 0000000..2f91b46 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp63.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp64.jpg b/oscardata/oscardata/bin/Debug/temp64.jpg new file mode 100755 index 0000000..300ebd9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp64.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp7.jpg b/oscardata/oscardata/bin/Debug/temp7.jpg new file mode 100755 index 0000000..0e63e4c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp7.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp8.jpg b/oscardata/oscardata/bin/Debug/temp8.jpg new file mode 100755 index 0000000..6e3f7c4 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp8.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp85.jpg b/oscardata/oscardata/bin/Debug/temp85.jpg new file mode 100755 index 0000000..57f60f9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp85.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp86.jpg b/oscardata/oscardata/bin/Debug/temp86.jpg new file mode 100755 index 0000000..673b305 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp86.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp87.jpg b/oscardata/oscardata/bin/Debug/temp87.jpg new file mode 100755 index 0000000..5f360be Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp87.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp88.jpg b/oscardata/oscardata/bin/Debug/temp88.jpg new file mode 100755 index 0000000..725f20b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp88.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp89.jpg b/oscardata/oscardata/bin/Debug/temp89.jpg new file mode 100755 index 0000000..2d12841 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp89.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp9.jpg b/oscardata/oscardata/bin/Debug/temp9.jpg new file mode 100755 index 0000000..8b8948f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp9.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp90.jpg b/oscardata/oscardata/bin/Debug/temp90.jpg new file mode 100755 index 0000000..02327c2 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp90.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp91.jpg b/oscardata/oscardata/bin/Debug/temp91.jpg new file mode 100755 index 0000000..df8c96c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp91.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp92.jpg b/oscardata/oscardata/bin/Debug/temp92.jpg new file mode 100755 index 0000000..fea887c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp92.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp93.jpg b/oscardata/oscardata/bin/Debug/temp93.jpg new file mode 100755 index 0000000..a2d7602 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp93.jpg differ diff --git a/oscardata/oscardata/bin/Debug/temp94.jpg b/oscardata/oscardata/bin/Debug/temp94.jpg new file mode 100755 index 0000000..0b8971e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/temp94.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX10261.jpg b/oscardata/oscardata/bin/Debug/tempTX10261.jpg new file mode 100755 index 0000000..a853568 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX10261.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX10275.jpg 
b/oscardata/oscardata/bin/Debug/tempTX10275.jpg new file mode 100755 index 0000000..4f52bfb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX10275.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX10974.jpg b/oscardata/oscardata/bin/Debug/tempTX10974.jpg new file mode 100755 index 0000000..c9cefa8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX10974.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX11606.jpg b/oscardata/oscardata/bin/Debug/tempTX11606.jpg new file mode 100755 index 0000000..b82d805 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX11606.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX11871.jpg b/oscardata/oscardata/bin/Debug/tempTX11871.jpg new file mode 100755 index 0000000..1aaa6e2 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX11871.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX12258.jpg b/oscardata/oscardata/bin/Debug/tempTX12258.jpg new file mode 100755 index 0000000..5535f3c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX12258.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX13349.jpg b/oscardata/oscardata/bin/Debug/tempTX13349.jpg new file mode 100755 index 0000000..2b59b51 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX13349.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX14075.jpg b/oscardata/oscardata/bin/Debug/tempTX14075.jpg new file mode 100755 index 0000000..f26a2fb Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX14075.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX15059.jpg b/oscardata/oscardata/bin/Debug/tempTX15059.jpg new file mode 100755 index 0000000..1fa887e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX15059.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX15327.jpg b/oscardata/oscardata/bin/Debug/tempTX15327.jpg new file mode 100755 index 0000000..6ff8b63 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX15327.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX15844.jpg b/oscardata/oscardata/bin/Debug/tempTX15844.jpg new file mode 100755 index 0000000..3c0dcbf Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX15844.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX16354.jpg b/oscardata/oscardata/bin/Debug/tempTX16354.jpg new file mode 100755 index 0000000..b873b04 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX16354.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX16823.jpg b/oscardata/oscardata/bin/Debug/tempTX16823.jpg new file mode 100755 index 0000000..bb5743c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX16823.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX17827.jpg b/oscardata/oscardata/bin/Debug/tempTX17827.jpg new file mode 100755 index 0000000..ebb93d8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX17827.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX18806.jpg b/oscardata/oscardata/bin/Debug/tempTX18806.jpg new file mode 100755 index 0000000..099453d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX18806.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX19324.jpg b/oscardata/oscardata/bin/Debug/tempTX19324.jpg new file mode 100755 index 0000000..5957196 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX19324.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX20402.jpg b/oscardata/oscardata/bin/Debug/tempTX20402.jpg new file 
mode 100755 index 0000000..dc6b440 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX20402.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX22098.jpg b/oscardata/oscardata/bin/Debug/tempTX22098.jpg new file mode 100755 index 0000000..ee5a6f3 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX22098.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX22283.jpg b/oscardata/oscardata/bin/Debug/tempTX22283.jpg new file mode 100755 index 0000000..7afcd13 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX22283.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX23069.jpg b/oscardata/oscardata/bin/Debug/tempTX23069.jpg new file mode 100755 index 0000000..9c3625f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX23069.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX23450.jpg b/oscardata/oscardata/bin/Debug/tempTX23450.jpg new file mode 100755 index 0000000..8623697 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX23450.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX23488.jpg b/oscardata/oscardata/bin/Debug/tempTX23488.jpg new file mode 100755 index 0000000..83180fc Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX23488.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX24436.jpg b/oscardata/oscardata/bin/Debug/tempTX24436.jpg new file mode 100755 index 0000000..a8cd562 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX24436.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX25391.jpg b/oscardata/oscardata/bin/Debug/tempTX25391.jpg new file mode 100755 index 0000000..a8e9315 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX25391.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX25492.jpg b/oscardata/oscardata/bin/Debug/tempTX25492.jpg new file mode 100755 index 0000000..b27543b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX25492.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX25779.jpg b/oscardata/oscardata/bin/Debug/tempTX25779.jpg new file mode 100755 index 0000000..bf04b1a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX25779.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX26608.jpg b/oscardata/oscardata/bin/Debug/tempTX26608.jpg new file mode 100755 index 0000000..5fb28ba Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX26608.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX27079.jpg b/oscardata/oscardata/bin/Debug/tempTX27079.jpg new file mode 100755 index 0000000..3cba822 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX27079.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX27150.jpg b/oscardata/oscardata/bin/Debug/tempTX27150.jpg new file mode 100755 index 0000000..7ba0468 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX27150.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX28197.jpg b/oscardata/oscardata/bin/Debug/tempTX28197.jpg new file mode 100755 index 0000000..37d9fd1 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX28197.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX28243.jpg b/oscardata/oscardata/bin/Debug/tempTX28243.jpg new file mode 100755 index 0000000..03d9e3a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX28243.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX30588.jpg b/oscardata/oscardata/bin/Debug/tempTX30588.jpg new file mode 100755 index 0000000..629d95a Binary files /dev/null 
and b/oscardata/oscardata/bin/Debug/tempTX30588.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX31031.jpg b/oscardata/oscardata/bin/Debug/tempTX31031.jpg new file mode 100755 index 0000000..cba9667 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX31031.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX31082.jpg b/oscardata/oscardata/bin/Debug/tempTX31082.jpg new file mode 100755 index 0000000..72a8278 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX31082.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX3225.jpg b/oscardata/oscardata/bin/Debug/tempTX3225.jpg new file mode 100755 index 0000000..3d46346 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX3225.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX32547.jpg b/oscardata/oscardata/bin/Debug/tempTX32547.jpg new file mode 100755 index 0000000..91aa73e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX32547.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX33526.jpg b/oscardata/oscardata/bin/Debug/tempTX33526.jpg new file mode 100755 index 0000000..688cc46 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX33526.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX33670.jpg b/oscardata/oscardata/bin/Debug/tempTX33670.jpg new file mode 100755 index 0000000..8b2e5cc Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX33670.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX34413.jpg b/oscardata/oscardata/bin/Debug/tempTX34413.jpg new file mode 100755 index 0000000..68afd2d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX34413.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX34852.jpg b/oscardata/oscardata/bin/Debug/tempTX34852.jpg new file mode 100755 index 0000000..32a74fa Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX34852.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX35222.jpg b/oscardata/oscardata/bin/Debug/tempTX35222.jpg new file mode 100755 index 0000000..346501a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX35222.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX35806.jpg b/oscardata/oscardata/bin/Debug/tempTX35806.jpg new file mode 100755 index 0000000..3d9fbb8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX35806.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX36280.jpg b/oscardata/oscardata/bin/Debug/tempTX36280.jpg new file mode 100755 index 0000000..ee5a6f3 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX36280.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX36485.jpg b/oscardata/oscardata/bin/Debug/tempTX36485.jpg new file mode 100755 index 0000000..c7a1b8b Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX36485.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX36517.jpg b/oscardata/oscardata/bin/Debug/tempTX36517.jpg new file mode 100755 index 0000000..3d9fbb8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX36517.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX36836.jpg b/oscardata/oscardata/bin/Debug/tempTX36836.jpg new file mode 100755 index 0000000..ae44c0f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX36836.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX36850.jpg b/oscardata/oscardata/bin/Debug/tempTX36850.jpg new file mode 100755 index 0000000..5b35a31 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX36850.jpg differ 
diff --git a/oscardata/oscardata/bin/Debug/tempTX37011.jpg b/oscardata/oscardata/bin/Debug/tempTX37011.jpg new file mode 100755 index 0000000..b82d805 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX37011.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX38118.jpg b/oscardata/oscardata/bin/Debug/tempTX38118.jpg new file mode 100755 index 0000000..cbccb3c Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX38118.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX38440.jpg b/oscardata/oscardata/bin/Debug/tempTX38440.jpg new file mode 100755 index 0000000..ee5a6f3 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX38440.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX39635.jpg b/oscardata/oscardata/bin/Debug/tempTX39635.jpg new file mode 100755 index 0000000..5444320 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX39635.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX39706.jpg b/oscardata/oscardata/bin/Debug/tempTX39706.jpg new file mode 100755 index 0000000..2ec4286 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX39706.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX40116.jpg b/oscardata/oscardata/bin/Debug/tempTX40116.jpg new file mode 100755 index 0000000..a440af9 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX40116.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX44050.jpg b/oscardata/oscardata/bin/Debug/tempTX44050.jpg new file mode 100755 index 0000000..cd07594 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX44050.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX46061.jpg b/oscardata/oscardata/bin/Debug/tempTX46061.jpg new file mode 100755 index 0000000..85d25bf Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX46061.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX4672.jpg b/oscardata/oscardata/bin/Debug/tempTX4672.jpg new file mode 100755 index 0000000..e40344a Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX4672.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX47165.jpg b/oscardata/oscardata/bin/Debug/tempTX47165.jpg new file mode 100755 index 0000000..20957b1 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX47165.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX48249.jpg b/oscardata/oscardata/bin/Debug/tempTX48249.jpg new file mode 100755 index 0000000..72a8278 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX48249.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX48459.jpg b/oscardata/oscardata/bin/Debug/tempTX48459.jpg new file mode 100755 index 0000000..5d31c4f Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX48459.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX48484.jpg b/oscardata/oscardata/bin/Debug/tempTX48484.jpg new file mode 100755 index 0000000..33071af Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX48484.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX49358.jpg b/oscardata/oscardata/bin/Debug/tempTX49358.jpg new file mode 100755 index 0000000..56c4cc4 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX49358.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX49364.jpg b/oscardata/oscardata/bin/Debug/tempTX49364.jpg new file mode 100755 index 0000000..a1ec562 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX49364.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX49440.jpg 
b/oscardata/oscardata/bin/Debug/tempTX49440.jpg new file mode 100755 index 0000000..caec140 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX49440.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX50291.jpg b/oscardata/oscardata/bin/Debug/tempTX50291.jpg new file mode 100755 index 0000000..36ef7d3 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX50291.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX50292.jpg b/oscardata/oscardata/bin/Debug/tempTX50292.jpg new file mode 100755 index 0000000..c9dea9e Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX50292.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX50352.jpg b/oscardata/oscardata/bin/Debug/tempTX50352.jpg new file mode 100755 index 0000000..83b4450 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX50352.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX51464.jpg b/oscardata/oscardata/bin/Debug/tempTX51464.jpg new file mode 100755 index 0000000..734aa25 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX51464.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX51976.jpg b/oscardata/oscardata/bin/Debug/tempTX51976.jpg new file mode 100755 index 0000000..230e4ab Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX51976.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX5478.jpg b/oscardata/oscardata/bin/Debug/tempTX5478.jpg new file mode 100755 index 0000000..20e201d Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX5478.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX55396.jpg b/oscardata/oscardata/bin/Debug/tempTX55396.jpg new file mode 100755 index 0000000..50e57e8 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX55396.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX55706.jpg b/oscardata/oscardata/bin/Debug/tempTX55706.jpg new file mode 100755 index 0000000..edea434 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX55706.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX56901.jpg b/oscardata/oscardata/bin/Debug/tempTX56901.jpg new file mode 100755 index 0000000..f92f282 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX56901.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX56962.jpg b/oscardata/oscardata/bin/Debug/tempTX56962.jpg new file mode 100755 index 0000000..2ec4286 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX56962.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX58707.jpg b/oscardata/oscardata/bin/Debug/tempTX58707.jpg new file mode 100755 index 0000000..b4d0ecd Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX58707.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX60245.jpg b/oscardata/oscardata/bin/Debug/tempTX60245.jpg new file mode 100755 index 0000000..7e0d697 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX60245.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX63376.jpg b/oscardata/oscardata/bin/Debug/tempTX63376.jpg new file mode 100755 index 0000000..4acfb93 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX63376.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX64035.jpg b/oscardata/oscardata/bin/Debug/tempTX64035.jpg new file mode 100755 index 0000000..d263b12 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX64035.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX64471.jpg b/oscardata/oscardata/bin/Debug/tempTX64471.jpg new file mode 
100755 index 0000000..83180fc Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX64471.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX64553.jpg b/oscardata/oscardata/bin/Debug/tempTX64553.jpg new file mode 100755 index 0000000..d98b239 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX64553.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX6513.jpg b/oscardata/oscardata/bin/Debug/tempTX6513.jpg new file mode 100755 index 0000000..a1011de Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX6513.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX781.jpg b/oscardata/oscardata/bin/Debug/tempTX781.jpg new file mode 100755 index 0000000..596cbc2 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX781.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tempTX931.jpg b/oscardata/oscardata/bin/Debug/tempTX931.jpg new file mode 100755 index 0000000..9e06537 Binary files /dev/null and b/oscardata/oscardata/bin/Debug/tempTX931.jpg differ diff --git a/oscardata/oscardata/bin/Debug/tmp.html b/oscardata/oscardata/bin/Debug/tmp.html new file mode 100755 index 0000000..65b204f --- /dev/null +++ b/oscardata/oscardata/bin/Debug/tmp.html @@ -0,0 +1,11 @@ +

+Ernst, DL1EV: PCB service:
+
+dl1ev@gmx.de
+
+Please send Ernst an e-mail with the PCB you would like; he will then send you all the required information.
+
+Please send an e-mail to dl1ev containing the PCB you need. He will send you all required information.
+
diff --git a/oscardata/oscardata/bin/Release/image.bin b/oscardata/oscardata/bin/Release/image.bin new file mode 100755 index 0000000..b830248 Binary files /dev/null and b/oscardata/oscardata/bin/Release/image.bin differ diff --git a/oscardata/oscardata/bin/Release/oscardata.exe b/oscardata/oscardata/bin/Release/oscardata.exe new file mode 100755 index 0000000..885c268 Binary files /dev/null and b/oscardata/oscardata/bin/Release/oscardata.exe differ diff --git a/oscardata/oscardata/bin/Release/oscardata.exe.config b/oscardata/oscardata/bin/Release/oscardata.exe.config new file mode 100755 index 0000000..e743be0 --- /dev/null +++ b/oscardata/oscardata/bin/Release/oscardata.exe.config @@ -0,0 +1,6 @@ + + + + + + diff --git a/oscardata/oscardata/bin/Release/oscardata.pdb b/oscardata/oscardata/bin/Release/oscardata.pdb new file mode 100755 index 0000000..379292a Binary files /dev/null and b/oscardata/oscardata/bin/Release/oscardata.pdb differ diff --git a/oscardata/oscardata/bin/Release/rxdata.jpg b/oscardata/oscardata/bin/Release/rxdata.jpg new file mode 100755 index 0000000..b59d050 Binary files /dev/null and b/oscardata/oscardata/bin/Release/rxdata.jpg differ diff --git a/oscardata/oscardata/bin/Release/temp100.jpg b/oscardata/oscardata/bin/Release/temp100.jpg new file mode 100755 index 0000000..b59d050 Binary files /dev/null and b/oscardata/oscardata/bin/Release/temp100.jpg differ diff --git a/oscardata/oscardata/bin/Release/temp182.jpg b/oscardata/oscardata/bin/Release/temp182.jpg new file mode 100755 index 0000000..c74d588 Binary files /dev/null and b/oscardata/oscardata/bin/Release/temp182.jpg differ diff --git a/oscardata/oscardata/bin/Release/temp532.jpg b/oscardata/oscardata/bin/Release/temp532.jpg new file mode 100644 index 0000000..59fd7df Binary files /dev/null and b/oscardata/oscardata/bin/Release/temp532.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX11032.jpg b/oscardata/oscardata/bin/Release/tempTX11032.jpg new file mode 100644 index 0000000..f784c90 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX11032.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX13323.jpg b/oscardata/oscardata/bin/Release/tempTX13323.jpg new file mode 100755 index 0000000..3abfc1b Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX13323.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX14056.jpg b/oscardata/oscardata/bin/Release/tempTX14056.jpg new file mode 100644 index 0000000..7ef72b8 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX14056.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX1443.jpg b/oscardata/oscardata/bin/Release/tempTX1443.jpg new file mode 100644 index 0000000..ab8cd88 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX1443.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX17133.jpg b/oscardata/oscardata/bin/Release/tempTX17133.jpg new file mode 100644 index 0000000..a9baa9f Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX17133.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX17258.jpg b/oscardata/oscardata/bin/Release/tempTX17258.jpg new file mode 100644 index 0000000..c466529 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX17258.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX21183.jpg b/oscardata/oscardata/bin/Release/tempTX21183.jpg new file mode 100644 index 0000000..4ddfced Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX21183.jpg differ diff --git 
a/oscardata/oscardata/bin/Release/tempTX21964.jpg b/oscardata/oscardata/bin/Release/tempTX21964.jpg new file mode 100644 index 0000000..f495a0e Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX21964.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX2239.jpg b/oscardata/oscardata/bin/Release/tempTX2239.jpg new file mode 100644 index 0000000..b591ee9 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX2239.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX24380.jpg b/oscardata/oscardata/bin/Release/tempTX24380.jpg new file mode 100644 index 0000000..a794e10 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX24380.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX25406.jpg b/oscardata/oscardata/bin/Release/tempTX25406.jpg new file mode 100644 index 0000000..4d57a11 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX25406.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX2657.jpg b/oscardata/oscardata/bin/Release/tempTX2657.jpg new file mode 100644 index 0000000..d21a49f Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX2657.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX26606.jpg b/oscardata/oscardata/bin/Release/tempTX26606.jpg new file mode 100644 index 0000000..c002c89 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX26606.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX27585.jpg b/oscardata/oscardata/bin/Release/tempTX27585.jpg new file mode 100644 index 0000000..6447ca1 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX27585.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX30789.jpg b/oscardata/oscardata/bin/Release/tempTX30789.jpg new file mode 100644 index 0000000..bd10713 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX30789.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX31938.jpg b/oscardata/oscardata/bin/Release/tempTX31938.jpg new file mode 100755 index 0000000..ee5a6f3 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX31938.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX32938.jpg b/oscardata/oscardata/bin/Release/tempTX32938.jpg new file mode 100644 index 0000000..bd10713 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX32938.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX33365.jpg b/oscardata/oscardata/bin/Release/tempTX33365.jpg new file mode 100644 index 0000000..ab59bcc Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX33365.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX39607.jpg b/oscardata/oscardata/bin/Release/tempTX39607.jpg new file mode 100644 index 0000000..c002c89 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX39607.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX4017.jpg b/oscardata/oscardata/bin/Release/tempTX4017.jpg new file mode 100644 index 0000000..73736f5 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX4017.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX43707.jpg b/oscardata/oscardata/bin/Release/tempTX43707.jpg new file mode 100644 index 0000000..1c43a03 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX43707.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX48749.jpg b/oscardata/oscardata/bin/Release/tempTX48749.jpg new file mode 100644 index 0000000..f8cbf40 Binary files /dev/null and 
b/oscardata/oscardata/bin/Release/tempTX48749.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX4991.jpg b/oscardata/oscardata/bin/Release/tempTX4991.jpg new file mode 100644 index 0000000..e6f5086 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX4991.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX52559.jpg b/oscardata/oscardata/bin/Release/tempTX52559.jpg new file mode 100644 index 0000000..bd10713 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX52559.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX53265.jpg b/oscardata/oscardata/bin/Release/tempTX53265.jpg new file mode 100644 index 0000000..69796e2 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX53265.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX54192.jpg b/oscardata/oscardata/bin/Release/tempTX54192.jpg new file mode 100644 index 0000000..411155b Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX54192.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX55479.jpg b/oscardata/oscardata/bin/Release/tempTX55479.jpg new file mode 100644 index 0000000..4f94604 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX55479.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX55856.jpg b/oscardata/oscardata/bin/Release/tempTX55856.jpg new file mode 100644 index 0000000..69cf651 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX55856.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX56233.jpg b/oscardata/oscardata/bin/Release/tempTX56233.jpg new file mode 100644 index 0000000..f784c90 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX56233.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX56426.jpg b/oscardata/oscardata/bin/Release/tempTX56426.jpg new file mode 100644 index 0000000..9819806 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX56426.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX57228.jpg b/oscardata/oscardata/bin/Release/tempTX57228.jpg new file mode 100644 index 0000000..4f94604 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX57228.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX58983.jpg b/oscardata/oscardata/bin/Release/tempTX58983.jpg new file mode 100644 index 0000000..ed5b326 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX58983.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX59337.jpg b/oscardata/oscardata/bin/Release/tempTX59337.jpg new file mode 100644 index 0000000..9927f28 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX59337.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX59408.jpg b/oscardata/oscardata/bin/Release/tempTX59408.jpg new file mode 100644 index 0000000..6306a10 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX59408.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX61001.jpg b/oscardata/oscardata/bin/Release/tempTX61001.jpg new file mode 100644 index 0000000..6447ca1 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX61001.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX61544.jpg b/oscardata/oscardata/bin/Release/tempTX61544.jpg new file mode 100644 index 0000000..f84287d Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX61544.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX7241.jpg b/oscardata/oscardata/bin/Release/tempTX7241.jpg new file mode 100644 index 0000000..46e20b7 
Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX7241.jpg differ diff --git a/oscardata/oscardata/bin/Release/tempTX7747.jpg b/oscardata/oscardata/bin/Release/tempTX7747.jpg new file mode 100644 index 0000000..9927f28 Binary files /dev/null and b/oscardata/oscardata/bin/Release/tempTX7747.jpg differ diff --git a/oscardata/oscardata/bin/Release/tmp.html b/oscardata/oscardata/bin/Release/tmp.html new file mode 100644 index 0000000..a7fcdb0 --- /dev/null +++ b/oscardata/oscardata/bin/Release/tmp.html @@ -0,0 +1,20 @@
+[unreadable binary content; the remainder of this hunk and the header of the following source file could not be recovered]
0) + { + String[] iparr = new String[anz]; + int num = 0; + for (int i = 0; i < addr.Length; i++) + { + if (addr[i].AddressFamily == AddressFamily.InterNetwork) + { + iparr[num] = addr[i].ToString(); + //Console.WriteLine("1:found my ip " + iparr[num]); + num++; + } + } + + return iparr; + } + } + catch + { } + + try + { + List ipList = new List(); + foreach (var ni in System.Net.NetworkInformation.NetworkInterface.GetAllNetworkInterfaces()) + { + foreach (var ua in ni.GetIPProperties().UnicastAddresses) + { + if (ua.Address.AddressFamily == AddressFamily.InterNetwork) + { + //Console.WriteLine("2:found my ip " + ua.Address.ToString()); + ipList.Add(ua.Address); + } + } + } + + if (ipList.Count > 0) + { + String[] iparr = new String[ipList.Count]; + int i = 0; + foreach (IPAddress v in ipList) + { + iparr[i++] = v.ToString(); + } + return iparr; + } + } + catch { } + Console.WriteLine("not found"); + return null; + } + + public static byte[] StringToByteArray(string str) + { + System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); + return enc.GetBytes(str); + } + + public static string ByteArrayToString(byte[] arr) + { + Byte[] ba = new byte[arr.Length]; + int dst = 0; + for (int i = 0; i < arr.Length; i++) + { + if (arr[i] != 0) ba[dst++] = arr[i]; + } + Byte[] ban = new byte[dst]; + Array.Copy(ba, ban, dst); + + System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); + return enc.GetString(ban); + } + + public static String addTmpPath(String fn) + { + if (statics.ostype == 0) + { + // Windows + return Application.UserAppDataPath + "\\" + fn; + } + else + { + // Linux + return "/tmp/" + fn; + } + } + + public static String getHomePath(String subpath, String filename) + { + String home = Application.UserAppDataPath; + String deli = "/"; + + if (statics.ostype == 0) + deli = "\\"; + + + if(subpath.Length == 0) + home = home + deli + DataStorage + deli; + else + home = home + deli + DataStorage + deli + subpath + deli; + + try + { + Directory.CreateDirectory(home); + } + catch { } + + + try + { + if (Directory.Exists(home) == false) + Console.WriteLine("create:" + home); + } + catch { } + + return home + filename; + } + + // Returns the file's size. + public static long GetFileSize(string file_name) + { + return new FileInfo(file_name).Length; + } + + // returns the filename of a path+filename string + public static String pureFilename(String fullfn) + { + // extract just the filename without a path + String fn; + int idx = fullfn.LastIndexOf('/'); + if (idx == -1) idx = fullfn.LastIndexOf('\\'); + if (idx == -1) + { + // fullfn does not contain a path + return fullfn; + } + else + { + // just the filename + try { fn = fullfn.Substring(idx + 1); } + catch { fn = fullfn; } + return fn; + } + } + + // returns only the path of a Path+Filename string + public static String purePath(String fullfn) + { + int idx = fullfn.LastIndexOf('/'); + if (idx == -1) idx = fullfn.LastIndexOf('\\'); + if (idx == -1) + return "."; + else + return fullfn.Substring(0, idx); + } + + // check if an image is a valid image + public static bool checkImage(String imgfn) + { + try + { + using (Image dummy = new Bitmap(imgfn)) + { + // valid image + } + return true; + } + catch + { + // invalid image + return false; + } + } + + // add a file extension or replace an existing one + public static String AddReplaceFileExtension(String fn, String ext) + { + int idx = fn.IndexOf('.'); + if(idx == -1) + { + // filename has no '.' + return fn + "." + ext; + } + + // filename has a '.' + // if the '.' 
is the first char, then the filename is invalid + if (idx == 0) return fn; + + return fn.Substring(0, idx) + "." + ext; + } + } +} diff --git a/oscardata/oscardata/crc.cs b/oscardata/oscardata/crc.cs new file mode 100755 index 0000000..cd6d0db --- /dev/null +++ b/oscardata/oscardata/crc.cs @@ -0,0 +1,37 @@ +using System; + +namespace oscardata +{ + class Crc + { + UInt16 reg16 = 0xffff; // Schieberegister + + UInt16 crc16_bytecalc(Byte byt) + { + int i; + UInt16 polynom = 0x8408; // Generatorpolynom + + for (i = 0; i < 8; ++i) + { + if ((reg16 & 1) != (byt & 1)) + reg16 = (UInt16)((reg16 >> 1) ^ polynom); + else + reg16 >>= 1; + byt >>= 1; + } + return reg16; // inverses Ergebnis, MSB zuerst + } + + public UInt16 crc16_messagecalc(Byte[] data, int len) + { + int i; + + reg16 = 0xffff; // Initialisiere Shift-Register mit Startwert + for (i = 0; i < len; i++) + { + reg16 = crc16_bytecalc(data[i]); // Berechne fuer jeweils 8 Bit der Nachricht + } + return reg16; + } + } +} diff --git a/oscardata/oscardata/imagehandler.cs b/oscardata/oscardata/imagehandler.cs new file mode 100755 index 0000000..2baaee4 --- /dev/null +++ b/oscardata/oscardata/imagehandler.cs @@ -0,0 +1,119 @@ +using System; +using System.Drawing; +using System.Drawing.Drawing2D; +using System.Drawing.Imaging; +using System.IO; +using System.Windows.Forms; + +namespace oscardata +{ + public class Imagehandler + { + // Save the file with a specific compression level. + private void SaveJpg(Image image, string file_name, long compression) + { + try + { + EncoderParameters encoder_params = new EncoderParameters(1); + encoder_params.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, compression); + + ImageCodecInfo image_codec_info = GetEncoderInfo("image/jpeg"); + File.Delete(file_name); + image.Save(file_name, image_codec_info, encoder_params); + } + catch (Exception ex) + { + MessageBox.Show("Error saving file '" + file_name + + "'\nTry a different file name.\n" + ex.Message, + "Save Error", MessageBoxButtons.OK, + MessageBoxIcon.Error); + } + } + + // Return an ImageCodecInfo object for this mime type. + private ImageCodecInfo GetEncoderInfo(string mime_type) + { + ImageCodecInfo[] encoders = ImageCodecInfo.GetImageEncoders(); + for (int i = 0; i <= encoders.Length; i++) + { + if (encoders[i].MimeType == mime_type) return encoders[i]; + } + return null; + } + + // Save the file with the indicated maximum file size. + // Return the compression level used. + public int SaveJpgAtFileSize(Image image, string file_name, long max_size) + { + for (int level = 100; level > 5; level -= 5) + { + // Try saving at this compression level. + SaveJpg(image, file_name, level); + + // If the file is small enough, we're done. 
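A brief usage note for the Crc class added in crc.cs above (the imagehandler.cs diff continues right after this note): the class computes a CRC-16 with polynomial 0x8408 and start value 0xffff over a byte buffer. The sketch below is hypothetical and not part of the repository; the frame layout and the order of the two checksum bytes are assumptions.

```csharp
// Hypothetical sketch: compute the CRC-16 of a payload with the Crc class
// from crc.cs and append it to a frame (byte order is an assumption).
using System;
using System.Text;

namespace oscardata
{
    class CrcUsageSketch
    {
        static byte[] AppendCrc(byte[] payload)
        {
            Crc crc = new Crc();
            UInt16 chk = crc.crc16_messagecalc(payload, payload.Length);

            byte[] frame = new byte[payload.Length + 2];
            Array.Copy(payload, frame, payload.Length);
            frame[frame.Length - 2] = (byte)(chk & 0xff); // low byte first (assumed)
            frame[frame.Length - 1] = (byte)(chk >> 8);   // high byte
            return frame;
        }

        static void Main()
        {
            byte[] frame = AppendCrc(Encoding.ASCII.GetBytes("test frame"));
            Console.WriteLine("frame length including CRC: " + frame.Length);
        }
    }
}
```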
+ if (statics.GetFileSize(file_name) <= max_size) + return level; + } + return 5; + } + + public Bitmap ResizeImage(Image image, int width, int height, String callsign) + { + // get original size of img + int x = image.Width; + int y = image.Height; + + // scale the greater size to the destination size + double relx = (double)width / (double)x; + double rely = (double)height / (double)y; + int nw = (int)((double)x * relx); + int nh = (int)((double)y * relx); + if (rely < relx) + { + nw = (int)((double)x * rely); + nh = (int)((double)y * rely); + } + + Bitmap destImage = new Bitmap(nw, nh); + using (Graphics g = Graphics.FromImage(destImage)) + { + g.DrawImage(image, 0, 0, nw, nh); + if (callsign != "") + { + using (var fnt = new Font("Verdana", 15.0f)) + { + var size = g.MeasureString(callsign, fnt); + var rect = new RectangleF(5, 5, size.Width, size.Height); + SolidBrush opaqueBrush = new SolidBrush(Color.FromArgb(128, 255, 255, 255)); + + g.FillRectangle(opaqueBrush, rect); + g.DrawString(callsign, fnt, Brushes.Blue, 5, 5); + } + } + } + + return destImage; + } + + // gets a receive payload, reconstruct the image + // type: 2=start, 3=cont + public void receive_image(Byte[] rxdata, int minfo) + { + BinaryWriter writer = null; + + if (minfo == statics.FirstFrame) + { + // image starts, create destination file + writer = new BinaryWriter(File.Open(statics.jpg_tempfilename, FileMode.Create)); + writer.Write(rxdata); + } + else + { + // continue with image + writer = new BinaryWriter(File.Open(statics.jpg_tempfilename, FileMode.Append)); + writer.Write(rxdata); + } + writer.Close(); + } + } +} diff --git a/oscardata/oscardata/obj/Debug/.NETFramework,Version=v4.0.AssemblyAttributes.cs b/oscardata/oscardata/obj/Debug/.NETFramework,Version=v4.0.AssemblyAttributes.cs new file mode 100755 index 0000000..9e65edd --- /dev/null +++ b/oscardata/oscardata/obj/Debug/.NETFramework,Version=v4.0.AssemblyAttributes.cs @@ -0,0 +1,4 @@ +// +using System; +using System.Reflection; +[assembly: global::System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.0", FrameworkDisplayName = ".NET Framework 4")] diff --git a/oscardata/oscardata/obj/Debug/.NETFramework,Version=v4.5.AssemblyAttributes.cs b/oscardata/oscardata/obj/Debug/.NETFramework,Version=v4.5.AssemblyAttributes.cs new file mode 100755 index 0000000..182bcf0 --- /dev/null +++ b/oscardata/oscardata/obj/Debug/.NETFramework,Version=v4.5.AssemblyAttributes.cs @@ -0,0 +1,4 @@ +// +using System; +using System.Reflection; +[assembly: global::System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ".NET Framework 4.5")] diff --git a/oscardata/oscardata/obj/Debug/DesignTimeResolveAssemblyReferences.cache b/oscardata/oscardata/obj/Debug/DesignTimeResolveAssemblyReferences.cache new file mode 100755 index 0000000..abeccf2 Binary files /dev/null and b/oscardata/oscardata/obj/Debug/DesignTimeResolveAssemblyReferences.cache differ diff --git a/oscardata/oscardata/obj/Debug/DesignTimeResolveAssemblyReferencesInput.cache b/oscardata/oscardata/obj/Debug/DesignTimeResolveAssemblyReferencesInput.cache new file mode 100755 index 0000000..07444ae Binary files /dev/null and b/oscardata/oscardata/obj/Debug/DesignTimeResolveAssemblyReferencesInput.cache differ diff --git a/oscardata/oscardata/obj/Debug/TempPE/Properties.Resources.Designer.cs.dll b/oscardata/oscardata/obj/Debug/TempPE/Properties.Resources.Designer.cs.dll new file mode 100755 index 0000000..f07492d Binary files /dev/null and 
b/oscardata/oscardata/obj/Debug/TempPE/Properties.Resources.Designer.cs.dll differ diff --git a/oscardata/oscardata/obj/Debug/oscardata.Form1.resources b/oscardata/oscardata/obj/Debug/oscardata.Form1.resources new file mode 100755 index 0000000..a1d32c7 Binary files /dev/null and b/oscardata/oscardata/obj/Debug/oscardata.Form1.resources differ diff --git a/oscardata/oscardata/obj/Debug/oscardata.Properties.Resources.resources b/oscardata/oscardata/obj/Debug/oscardata.Properties.Resources.resources new file mode 100755 index 0000000..a3fc1b2 Binary files /dev/null and b/oscardata/oscardata/obj/Debug/oscardata.Properties.Resources.resources differ diff --git a/oscardata/oscardata/obj/Debug/oscardata.csproj.CoreCompileInputs.cache b/oscardata/oscardata/obj/Debug/oscardata.csproj.CoreCompileInputs.cache new file mode 100755 index 0000000..b84666a --- /dev/null +++ b/oscardata/oscardata/obj/Debug/oscardata.csproj.CoreCompileInputs.cache @@ -0,0 +1 @@ +8b1ddd70a4415b8d1df4efbc0e8e471bfcd6bae2 diff --git a/oscardata/oscardata/obj/Debug/oscardata.csproj.FileListAbsolute.txt b/oscardata/oscardata/obj/Debug/oscardata.csproj.FileListAbsolute.txt new file mode 100755 index 0000000..ddc3c79 --- /dev/null +++ b/oscardata/oscardata/obj/Debug/oscardata.csproj.FileListAbsolute.txt @@ -0,0 +1,10 @@ +E:\funk\gnuradio\oscardata\oscardata\bin\Debug\oscardata.exe.config +E:\funk\gnuradio\oscardata\oscardata\bin\Debug\oscardata.exe +E:\funk\gnuradio\oscardata\oscardata\bin\Debug\oscardata.pdb +E:\funk\gnuradio\oscardata\oscardata\obj\Debug\oscardata.Form1.resources +E:\funk\gnuradio\oscardata\oscardata\obj\Debug\oscardata.Properties.Resources.resources +E:\funk\gnuradio\oscardata\oscardata\obj\Debug\oscardata.csproj.GenerateResource.Cache +E:\funk\gnuradio\oscardata\oscardata\obj\Debug\oscardata.csproj.CoreCompileInputs.cache +E:\funk\gnuradio\oscardata\oscardata\obj\Debug\oscardata.exe +E:\funk\gnuradio\oscardata\oscardata\obj\Debug\oscardata.pdb +E:\funk\gnuradio\oscardata\oscardata\obj\Debug\oscardata.csprojAssemblyReference.cache diff --git a/oscardata/oscardata/obj/Debug/oscardata.csproj.GenerateResource.cache b/oscardata/oscardata/obj/Debug/oscardata.csproj.GenerateResource.cache new file mode 100755 index 0000000..a9cf318 Binary files /dev/null and b/oscardata/oscardata/obj/Debug/oscardata.csproj.GenerateResource.cache differ diff --git a/oscardata/oscardata/obj/Debug/oscardata.csprojAssemblyReference.cache b/oscardata/oscardata/obj/Debug/oscardata.csprojAssemblyReference.cache new file mode 100755 index 0000000..7177511 Binary files /dev/null and b/oscardata/oscardata/obj/Debug/oscardata.csprojAssemblyReference.cache differ diff --git a/oscardata/oscardata/obj/Debug/oscardata.exe b/oscardata/oscardata/obj/Debug/oscardata.exe new file mode 100755 index 0000000..6e65fa5 Binary files /dev/null and b/oscardata/oscardata/obj/Debug/oscardata.exe differ diff --git a/oscardata/oscardata/obj/Debug/oscardata.pdb b/oscardata/oscardata/obj/Debug/oscardata.pdb new file mode 100755 index 0000000..8beccbd Binary files /dev/null and b/oscardata/oscardata/obj/Debug/oscardata.pdb differ diff --git a/oscardata/oscardata/obj/Release/.NETFramework,Version=v4.0.AssemblyAttributes.cs b/oscardata/oscardata/obj/Release/.NETFramework,Version=v4.0.AssemblyAttributes.cs new file mode 100755 index 0000000..9e65edd --- /dev/null +++ b/oscardata/oscardata/obj/Release/.NETFramework,Version=v4.0.AssemblyAttributes.cs @@ -0,0 +1,4 @@ +// +using System; +using System.Reflection; +[assembly: 
global::System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.0", FrameworkDisplayName = ".NET Framework 4")] diff --git a/oscardata/oscardata/obj/Release/.NETFramework,Version=v4.5.AssemblyAttributes.cs b/oscardata/oscardata/obj/Release/.NETFramework,Version=v4.5.AssemblyAttributes.cs new file mode 100755 index 0000000..182bcf0 --- /dev/null +++ b/oscardata/oscardata/obj/Release/.NETFramework,Version=v4.5.AssemblyAttributes.cs @@ -0,0 +1,4 @@ +// +using System; +using System.Reflection; +[assembly: global::System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ".NET Framework 4.5")] diff --git a/oscardata/oscardata/obj/Release/DesignTimeResolveAssemblyReferences.cache b/oscardata/oscardata/obj/Release/DesignTimeResolveAssemblyReferences.cache new file mode 100755 index 0000000..df5578d Binary files /dev/null and b/oscardata/oscardata/obj/Release/DesignTimeResolveAssemblyReferences.cache differ diff --git a/oscardata/oscardata/obj/Release/DesignTimeResolveAssemblyReferencesInput.cache b/oscardata/oscardata/obj/Release/DesignTimeResolveAssemblyReferencesInput.cache new file mode 100755 index 0000000..e50d701 Binary files /dev/null and b/oscardata/oscardata/obj/Release/DesignTimeResolveAssemblyReferencesInput.cache differ diff --git a/oscardata/oscardata/obj/Release/TempPE/Properties.Resources.Designer.cs.dll b/oscardata/oscardata/obj/Release/TempPE/Properties.Resources.Designer.cs.dll new file mode 100755 index 0000000..32214a4 Binary files /dev/null and b/oscardata/oscardata/obj/Release/TempPE/Properties.Resources.Designer.cs.dll differ diff --git a/oscardata/oscardata/obj/Release/oscardata.Form1.resources b/oscardata/oscardata/obj/Release/oscardata.Form1.resources new file mode 100755 index 0000000..a1d32c7 Binary files /dev/null and b/oscardata/oscardata/obj/Release/oscardata.Form1.resources differ diff --git a/oscardata/oscardata/obj/Release/oscardata.Properties.Resources.resources b/oscardata/oscardata/obj/Release/oscardata.Properties.Resources.resources new file mode 100755 index 0000000..a3fc1b2 Binary files /dev/null and b/oscardata/oscardata/obj/Release/oscardata.Properties.Resources.resources differ diff --git a/oscardata/oscardata/obj/Release/oscardata.csproj.CoreCompileInputs.cache b/oscardata/oscardata/obj/Release/oscardata.csproj.CoreCompileInputs.cache new file mode 100755 index 0000000..6f32516 --- /dev/null +++ b/oscardata/oscardata/obj/Release/oscardata.csproj.CoreCompileInputs.cache @@ -0,0 +1 @@ +650c3889c3141636f761440e7398ad87b17746a1 diff --git a/oscardata/oscardata/obj/Release/oscardata.csproj.FileListAbsolute.txt b/oscardata/oscardata/obj/Release/oscardata.csproj.FileListAbsolute.txt new file mode 100755 index 0000000..78a44cd --- /dev/null +++ b/oscardata/oscardata/obj/Release/oscardata.csproj.FileListAbsolute.txt @@ -0,0 +1,10 @@ +E:\funk\gnuradio\oscardata\oscardata\bin\Release\oscardata.exe.config +E:\funk\gnuradio\oscardata\oscardata\bin\Release\oscardata.exe +E:\funk\gnuradio\oscardata\oscardata\bin\Release\oscardata.pdb +E:\funk\gnuradio\oscardata\oscardata\obj\Release\oscardata.Form1.resources +E:\funk\gnuradio\oscardata\oscardata\obj\Release\oscardata.Properties.Resources.resources +E:\funk\gnuradio\oscardata\oscardata\obj\Release\oscardata.csproj.GenerateResource.Cache +E:\funk\gnuradio\oscardata\oscardata\obj\Release\oscardata.csproj.CoreCompileInputs.cache +E:\funk\gnuradio\oscardata\oscardata\obj\Release\oscardata.exe +E:\funk\gnuradio\oscardata\oscardata\obj\Release\oscardata.pdb 
+E:\funk\gnuradio\oscardata\oscardata\obj\Release\oscardata.csprojAssemblyReference.cache diff --git a/oscardata/oscardata/obj/Release/oscardata.csproj.GenerateResource.cache b/oscardata/oscardata/obj/Release/oscardata.csproj.GenerateResource.cache new file mode 100755 index 0000000..d895f1c Binary files /dev/null and b/oscardata/oscardata/obj/Release/oscardata.csproj.GenerateResource.cache differ diff --git a/oscardata/oscardata/obj/Release/oscardata.csprojAssemblyReference.cache b/oscardata/oscardata/obj/Release/oscardata.csprojAssemblyReference.cache new file mode 100755 index 0000000..b169610 Binary files /dev/null and b/oscardata/oscardata/obj/Release/oscardata.csprojAssemblyReference.cache differ diff --git a/oscardata/oscardata/obj/Release/oscardata.exe b/oscardata/oscardata/obj/Release/oscardata.exe new file mode 100755 index 0000000..885c268 Binary files /dev/null and b/oscardata/oscardata/obj/Release/oscardata.exe differ diff --git a/oscardata/oscardata/obj/Release/oscardata.pdb b/oscardata/oscardata/obj/Release/oscardata.pdb new file mode 100755 index 0000000..379292a Binary files /dev/null and b/oscardata/oscardata/obj/Release/oscardata.pdb differ diff --git a/oscardata/oscardata/oscardata.csproj b/oscardata/oscardata/oscardata.csproj new file mode 100755 index 0000000..9765068 --- /dev/null +++ b/oscardata/oscardata/oscardata.csproj @@ -0,0 +1,112 @@ + + + + + Debug + AnyCPU + {989BF5C6-36F6-4158-9FB2-42E86D2020DB} + WinExe + oscardata + oscardata + v4.5 + 512 + true + + + + AnyCPU + true + full + false + bin\Debug\ + DEBUG;TRACE + prompt + 4 + false + + + AnyCPU + pdbonly + true + bin\Release\ + TRACE + prompt + 4 + false + + + Satellite-icon.ico + + + + + + + + + + + + + + + + + + + + + + + + Form + + + Form1.cs + + + + + + + + Form1.cs + + + ResXFileCodeGenerator + Resources.Designer.cs + Designer + + + True + Resources.resx + True + + + + SettingsSingleFileGenerator + Settings.Designer.cs + + + True + Settings.settings + True + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/oscardata/oscardata/packages.config b/oscardata/oscardata/packages.config new file mode 100755 index 0000000..5e78969 --- /dev/null +++ b/oscardata/oscardata/packages.config @@ -0,0 +1,4 @@ + + + + \ No newline at end of file diff --git a/oscardata/oscardata/udp.cs b/oscardata/oscardata/udp.cs new file mode 100755 index 0000000..1a566b3 --- /dev/null +++ b/oscardata/oscardata/udp.cs @@ -0,0 +1,350 @@ +/* + * 9/2020 (c) DJ0ABR, Kurt Moraw + * License: GPL 3.0 + * + * udp.cs + * ------ + * + * Creates a new thread which handles all incoming and outgoing UDP traffic. + * Communication to other threads is done via a thread-safe pipe. + * The UDP transmitter handles the correct datarate according to the modem speed. 
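+ *
+ * UDP frame layout (see Udprxloop below): byte 0 is the message type
+ * (1 = payload, 3 = broadcast/search response, 4 = FFT data, 5 = IQ samples),
+ * followed by the data bytes.
+ * TX pacing (see Udptxloop below): one block of statics.UdpBlocklen bytes is
+ * released every (UdpBlocklen * 8 * 1000 / datarate) ms, so the average bit
+ * rate on the wire matches the selected modem speed.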
+*/ + +using System; +using System.Collections; +using System.Net; +using System.Net.Sockets; +using System.Threading; + +namespace oscardata +{ + public static class Udp + { + // this thread handles udp RX + static Thread udprx_thread; + static Thread udptx_thread; + + // Pipes for data transferred via UDP ports + static UdpQueue uq_rx = new UdpQueue(); + static UdpQueue uq_tx = new UdpQueue(); + static UdpQueue uq_fft = new UdpQueue(); + static UdpQueue uq_iq = new UdpQueue(); + + public static int searchtimeout = 0; + + // Constructor + // called when Udp is created by the main program + public static void InitUdp() + { + // create thread for UDP RX + udprx_thread = new Thread(new ThreadStart(Udprxloop)); + udprx_thread.Name = "Thread: oscardata UDP-RX"; + udprx_thread.Start(); + + // create thread for UDP TX + udptx_thread = new Thread(new ThreadStart(Udptxloop)); + udptx_thread.Name = "Thread: oscardata UDP-TX"; + udptx_thread.Start(); + } + + public static void Close() + { + try + { + udprx_thread.Abort(); + udptx_thread.Abort(); + } + catch { } + } + + // Udp RX Loop runs in its own thread + static void Udprxloop() + { + // define UDP port + UdpClient udpc = new UdpClient(statics.UdpRXport); + udpc.Client.ReceiveTimeout = 100; + + while (statics.running) + { + try + { + // receive data from UDP port + IPEndPoint RemoteEndpoint = new IPEndPoint(IPAddress.Any, 0); + Byte[] rxarr = udpc.Receive(ref RemoteEndpoint); + if (rxarr != null) + { + // Data received: + // RemoteEndpoint.Address ... IP address of the sender + // RemoteEndpoint.Port ... port + // b[0] ... Type of data + // b+1 ... Byte array containing the data + int rxtype = rxarr[0]; + Byte[] b = new byte[rxarr.Length - 1]; + Array.Copy(rxarr, 1, b, 0, b.Length); + + // payload + if (rxtype == 1) + uq_rx.Add(b); + + // Broadcast response + if (rxtype == 3) + { + statics.ModemIP = RemoteEndpoint.Address.ToString(); + searchtimeout = 0; + } + + // FFT data + if (rxtype == 4) + uq_fft.Add(b); + + // IQ data + if (rxtype == 5) + { + for (int i = 0; i < b.Length; i++) + { + // insert new byte in lastb + for (int sh = 12 - 1; sh > 0; sh--) + lastb[sh] = lastb[sh - 1]; + lastb[0] = b[i]; + + // test if aligned + if (lastb[0] == 0 && lastb[1] == 0 && lastb[2] == 3 && lastb[3] == 0xe8) + { + // we are aligned to a re value + int re = lastb[4]; + re <<= 8; + re += lastb[5]; + re <<= 8; + re += lastb[6]; + re <<= 8; + re += lastb[7]; + + int im = lastb[8]; + im <<= 8; + im += lastb[9]; + im <<= 8; + im += lastb[10]; + im <<= 8; + im += lastb[11]; + + qpskitem q = new qpskitem(); + q.re = re; + q.im = im; + uq_iq.Add(q); + } + else if (lastb[0] == 0xe8 && lastb[1] == 3 && lastb[2] == 0 && lastb[3] == 0) + { + // we are aligned to a re value + int re = lastb[7]; + re <<= 8; + re += lastb[6]; + re <<= 8; + re += lastb[5]; + re <<= 8; + re += lastb[4]; + + int im = lastb[11]; + im <<= 8; + im += lastb[10]; + im <<= 8; + im += lastb[9]; + im <<= 8; + im += lastb[8]; + + qpskitem q = new qpskitem(); + q.re = re; + q.im = im; + uq_iq.Add(q); + } + } + } + } + } + catch { } + } + } + + static AutoResetEvent autoEvent = new AutoResetEvent(false); + + // Udp TX Loop runs in its own thread + static void Udptxloop() + { + UdpClient udpc = new UdpClient(); + + // calculate cycle time for the requested data rate + // time in ms for one bit: 1000/statics.datarate + + int actdatarate = statics.getDatarate(); + int wait_datarate = (int)(((double)statics.UdpBlocklen * 8.0 * 1000.0 / (double)(statics.getDatarate()))); + + Timer TTimer = new Timer(new 
TimerCallback(TXTickTimer), autoEvent, 0, wait_datarate); + + while (statics.running) + { + autoEvent.WaitOne(); + try + { + if (uq_tx.Count() > 0) + { + // TX data available + Byte[] b = uq_tx.Getarr(); + udpc.Send(b, b.Length, statics.ModemIP, statics.UdpTXport); + } + } + catch (Exception e) + { + String err = e.ToString(); + } + if(statics.getDatarate() != actdatarate) + { + // rate has been changed, reset the timer + wait_datarate = (int)(((double)statics.UdpBlocklen * 8.0 * 1000.0 / (double)(statics.getDatarate()))); + TTimer.Change(0, wait_datarate); + actdatarate = statics.getDatarate(); + } + } + } + + public static void UdpBCsend(Byte[] b, String ip, int port) + { + UdpClient udpc = new UdpClient(); + udpc.EnableBroadcast = true; + udpc.Send(b, b.Length, ip, port); + } + + static void TXTickTimer(object stateInfo) + { + autoEvent = (AutoResetEvent)stateInfo; + + autoEvent.Set(); + } + + // send a Byte array via UDP + // this function can be called from anywhere in the program + // it transfers the data to the udp-tx thread via a thread-safe pipe + public static void UdpSend(Byte[] b) + { + uq_tx.Add(b); + } + + public static int GetBufferCount() + { + return uq_tx.Count(); + } + + public static Byte[] UdpReceive() + { + if (uq_rx.Count() == 0) return null; + + return uq_rx.Getarr(); + } + + public static UInt16[] UdpGetFFT() + { + if (uq_fft.Count() == 0) return null; + + Byte[] d = uq_fft.Getarr(); + UInt16[] varr = new UInt16[d.Length / 2]; + int j = 0; + for (int i = 0; i < d.Length; i += 2) + { + if ((i + 1) >= d.Length) break; + UInt16 us = d[i]; + us <<= 8; + us += d[i + 1]; + if (j >= (varr.Length)) break; + varr[j++] = us; + } + return varr; + } + + static Byte[] lastb = new Byte[12]; + public static qpskitem UdpGetIQ() + { + if (uq_iq.Count() == 0) return null; + + return uq_iq.GetQPSKitem(); + } + } + + // this class is a thread safe queue wich is used + // to exchange data with the UDP RX/TX threads + public class UdpQueue + { + Queue myQ = new Queue(); + + public void Add(Byte [] b) + { + lock (myQ.SyncRoot) + { + myQ.Enqueue(b); + } + } + + public void Add(qpskitem b) + { + lock (myQ.SyncRoot) + { + myQ.Enqueue(b); + } + } + + public Byte [] Getarr() + { + Byte[] b; + + lock (myQ.SyncRoot) + { + b = (Byte [])myQ.Dequeue(); + } + return b; + } + + public qpskitem GetQPSKitem() + { + qpskitem b; + + lock (myQ.SyncRoot) + { + b = (qpskitem)myQ.Dequeue(); + } + return b; + } + + public qpskitem GetItem() + { + qpskitem b; + + lock (myQ.SyncRoot) + { + b = (qpskitem)myQ.Dequeue(); + } + return b; + } + + public int Count() + { + int result; + + lock (myQ.SyncRoot) + { + result = myQ.Count; + } + return result; + } + + public void Clear() + { + lock (myQ.SyncRoot) + { + myQ.Clear(); + } + } + } + + public class qpskitem + { + public int re; + public int im; + } +} diff --git a/oscardata/oscardata/zip.cs b/oscardata/oscardata/zip.cs new file mode 100755 index 0000000..ddfb888 --- /dev/null +++ b/oscardata/oscardata/zip.cs @@ -0,0 +1,71 @@ +using System; +using System.IO; +using System.IO.Compression; + + +namespace oscardata +{ + public class ZipStorer + { + public String unzipFile(String zipfilename) + { + try + { + using (var zip = ZipFile.Open(zipfilename, ZipArchiveMode.Read)) + { + // create a temporary subfolder (delete if exists) + String pth = statics.addTmpPath("modemzip"); + try + { + Directory.Delete(pth, true); + } + catch { } + Directory.CreateDirectory(pth); + // extract the ZIP into this subfolder + zip.ExtractToDirectory(pth); + + // search file in this 
folder and move it to: unzipped_RXtempfilename + String[] files = Directory.GetFiles(pth); + foreach (String s in files) + { + if (s[0] != '.') // ignore path entries + { + return s; + } + } + } + } + catch (Exception ex) + { + Console.WriteLine(ex.ToString()); + } + + return null; + } + + // zipfilename ... name of xyz.zip file + // filename ... name of file inside the zip file (with or without path, as required) + // FullPath ... pathname+filename of the file to be included in the zip + // returns: size of zip file + public long zipFile(String zipfilename, String filename, String FullPath) + { + try + { + File.Delete(zipfilename); + } + catch { } + + try + { + using (var zip = ZipFile.Open(zipfilename, ZipArchiveMode.Create)) + { + zip.CreateEntryFromFile(FullPath, filename); + } + + return statics.GetFileSize(zipfilename); + } + catch { } + return 0; + } + } +} diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/.signature.p7s b/oscardata/packages/MathNet.Numerics.4.12.0/.signature.p7s new file mode 100755 index 0000000..0b69026 Binary files /dev/null and b/oscardata/packages/MathNet.Numerics.4.12.0/.signature.p7s differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/MathNet.Numerics.4.12.0.nupkg b/oscardata/packages/MathNet.Numerics.4.12.0/MathNet.Numerics.4.12.0.nupkg new file mode 100755 index 0000000..391b6fb Binary files /dev/null and b/oscardata/packages/MathNet.Numerics.4.12.0/MathNet.Numerics.4.12.0.nupkg differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/icon.png b/oscardata/packages/MathNet.Numerics.4.12.0/icon.png new file mode 100755 index 0000000..7f46a40 Binary files /dev/null and b/oscardata/packages/MathNet.Numerics.4.12.0/icon.png differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.dll new file mode 100755 index 0000000..d1539c1 Binary files /dev/null and b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.dll differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.xml new file mode 100755 index 0000000..5f9e8af --- /dev/null +++ b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net40/MathNet.Numerics.xml @@ -0,0 +1,57152 @@ + + + + MathNet.Numerics + + + + + Useful extension methods for Arrays. + + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Enumerative Combinatorics and Counting. + + + + + Count the number of possible variations without repetition. + The order matters and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of distinct variations. + + + + Count the number of possible variations with repetition. + The order matters and each object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of distinct variations with repetition. + + + + Count the number of possible combinations without repetition. 
+ The order does not matter and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of combinations. + + + + Count the number of possible combinations with repetition. + The order does not matter and an object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of combinations with repetition. + + + + Count the number of possible permutations (without repetition). + + Number of (distinguishable) elements in the set. + Maximum number of permutations without repetition. + + + + Generate a random permutation, without repetition, by generating the index numbers 0 to N-1 and shuffle them randomly. + Implemented using Fisher-Yates Shuffling. + + An array of length N that contains (in any order) the integers of the interval [0, N). + Number of (distinguishable) elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation, without repetition, from a data array by reordering the provided array in-place. + Implemented using Fisher-Yates Shuffling. The provided data array will be modified. + + The data array to be reordered. The array will be modified by this routine. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation from a data sequence by returning the provided data in random order. + Implemented using Fisher-Yates Shuffling. + + The data elements to be reordered. + The random number generator to use. Optional; the default random source will be used if null. + + + + Generate a random combination, without repetition, by randomly selecting some of N elements. + + Number of elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Generate a random combination, without repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Select a random combination, without repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen combination, in the original order. + + + + Generates a random combination, with repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + Integer mask array of length N, for each item the number of times it was selected. + + + + Select a random combination, with repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Elements can be chosen more than once. + The random number generator to use. 
Optional; the default random source will be used if null. + The chosen combination with repetition, in the original order. + + + + Generate a random variation, without repetition, by randomly selecting k of n elements with order. + Implemented using partial Fisher-Yates Shuffling. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. + Implemented using partial Fisher-Yates Shuffling. + + The data source to choose from. + Number of elements (k) to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation, in random order. + + + + Generate a random variation, with repetition, by randomly selecting k of n elements with order. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation with repetition, in random order. + + + + 32-bit single precision complex numbers class. + + + + The class Complex32 provides all elementary operations + on complex numbers. All the operators +, -, + *, /, ==, != are defined in the + canonical way. Additional complex trigonometric functions + are also provided. Note that the Complex32 structures + has two special constant values and + . + + + + Complex32 x = new Complex32(1f,2f); + Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); + Complex32 z = (x + y) / (x - y); + + + + For mathematical details about complex numbers, please + have a look at the + Wikipedia + + + + + + The real component of the complex number. + + + + + The imaginary component of the complex number. + + + + + Initializes a new instance of the Complex32 structure with the given real + and imaginary parts. + + The value for the real component. + The value for the imaginary component. + + + + Creates a complex number from a point's polar coordinates. + + A complex number. + The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. + The phase, which is the angle from the line to the horizontal axis, measured in radians. + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to one and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to one. + + + + + Returns a new instance + with real and imaginary numbers positive infinite. + + + + + Returns a new instance + with real and imaginary numbers not a number. 
+ + + + + Gets the real component of the complex number. + + The real component of the complex number. + + + + Gets the real imaginary component of the complex number. + + The real imaginary component of the complex number. + + + + Gets the phase or argument of this Complex32. + + + Phase always returns a value bigger than negative Pi and + smaller or equal to Pi. If this Complex32 is zero, the Complex32 + is assumed to be positive real with an argument of zero. + + The phase or argument of this Complex32 + + + + Gets the magnitude (or absolute value) of a complex number. + + Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN + The magnitude of the current instance. + + + + Gets the squared magnitude (or squared absolute value) of a complex number. + + The squared magnitude of the current instance. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex32. + + + + Gets a value indicating whether the Complex32 is zero. + + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. + + + true if this instance is ; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. + + + true if this instance is infinite; otherwise, false. + + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + + true if this instance is real nonnegative number; otherwise, false. + + + + + Exponential of this Complex32 (exp(x), E^x). + + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex32 (Base E). + + The natural logarithm of this complex number. + + + + Common Logarithm of this Complex32 (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex32 with custom base. + + The logarithm of this complex number. + + + + Raise this Complex32 to the given value. + + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex32 to the inverse of the given value. + + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex32 + + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex32 + + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex32. + + + + + Evaluate all cubic roots of this Complex32. + + + + + Equality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real and imaginary components of the two complex numbers are equal; false otherwise. + + + + Inequality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real or imaginary components of the two complex numbers are not equal; false otherwise. + + + + Unary addition. + + The complex number to operate on. 
+ Returns the same complex number. + + + + Unary minus. + + The complex number to operate on. + The negated value of the . + + + Addition operator. Adds two complex numbers together. + The result of the addition. + One of the complex numbers to add. + The other complex numbers to add. + + + Subtraction operator. Subtracts two complex numbers. + The result of the subtraction. + The complex number to subtract from. + The complex number to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The complex numbers to add. + The float value to add. + + + Subtraction operator. Subtracts float value from a complex value. + The result of the subtraction. + The complex number to subtract from. + The float value to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The float value to add. + The complex numbers to add. + + + Subtraction operator. Subtracts complex value from a float value. + The result of the subtraction. + The float vale to subtract from. + The complex value to subtract. + + + Multiplication operator. Multiplies two complex numbers. + The result of the multiplication. + One of the complex numbers to multiply. + The other complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The float value to multiply. + The complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The complex number to multiply. + The float value to multiply. + + + Division operator. Divides a complex number by another. + Enhanced Smith's algorithm for dividing two complex numbers + + The result of the division. + The dividend. + The divisor. + + + + Helper method for dividing. + + Re first + Im first + Re second + Im second + + + + + Division operator. Divides a float value by a complex number. + Algorithm based on Smith's algorithm + + The result of the division. + The dividend. + The divisor. + + + Division operator. Divides a complex number by a float value. + The result of the division. + The dividend. + The divisor. + + + + Computes the conjugate of a complex number and returns the result. + + + + + Returns the multiplicative inverse of a complex number. + + + + + Converts the value of the current complex number to its equivalent string representation in Cartesian form. + + The string representation of the current instance in Cartesian form. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format for its real and imaginary parts. + + The string representation of the current instance in Cartesian form. + A standard or custom numeric format string. + + is not a valid format string. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified culture-specific formatting information. + + The string representation of the current instance in Cartesian form, as specified by . + An object that supplies culture-specific formatting information. + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. + The string representation of the current instance in Cartesian form, as specified by and . 
+ A standard or custom numeric format string. + An object that supplies culture-specific formatting information. + + is not a valid format string. + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + The hash code for the complex number. + + + The hash code of the complex number. + + + The hash code is calculated as + System.Math.Exp(ComplexMath.Absolute(complexNumber)). + + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as float. + + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Explicit conversion of a real decimal to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Explicit conversion of a Complex to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Implicit conversion of a real byte to a Complex32. + + The byte value to convert. + The result of the conversion. + + + + Implicit conversion of a real short to a Complex32. + + The short value to convert. + The result of the conversion. + + + + Implicit conversion of a signed byte to a Complex32. + + The signed byte value to convert. + The result of the conversion. + + + + Implicit conversion of a unsigned real short to a Complex32. + + The unsigned short value to convert. + The result of the conversion. + + + + Implicit conversion of a real int to a Complex32. + + The int value to convert. + The result of the conversion. 
+ + + + Implicit conversion of a BigInteger int to a Complex32. + + The BigInteger value to convert. + The result of the conversion. + + + + Implicit conversion of a real long to a Complex32. + + The long value to convert. + The result of the conversion. + + + + Implicit conversion of a real uint to a Complex32. + + The uint value to convert. + The result of the conversion. + + + + Implicit conversion of a real ulong to a Complex32. + + The ulong value to convert. + The result of the conversion. + + + + Implicit conversion of a real float to a Complex32. + + The float value to convert. + The result of the conversion. + + + + Implicit conversion of a real double to a Complex32. + + The double value to convert. + The result of the conversion. + + + + Converts this Complex32 to a . + + A with the same values as this Complex32. + + + + Returns the additive inverse of a specified complex number. + + The result of the real and imaginary components of the value parameter multiplied by -1. + A complex number. + + + + Computes the conjugate of a complex number and returns the result. + + The conjugate of . + A complex number. + + + + Adds two complex numbers and returns the result. + + The sum of and . + The first complex number to add. + The second complex number to add. + + + + Subtracts one complex number from another and returns the result. + + The result of subtracting from . + The value to subtract from (the minuend). + The value to subtract (the subtrahend). + + + + Returns the product of two complex numbers. + + The product of the and parameters. + The first complex number to multiply. + The second complex number to multiply. + + + + Divides one complex number by another and returns the result. + + The quotient of the division. + The complex number to be divided. + The complex number to divide by. + + + + Returns the multiplicative inverse of a complex number. + + The reciprocal of . + A complex number. + + + + Returns the square root of a specified complex number. + + The square root of . + A complex number. + + + + Gets the absolute value (or magnitude) of a complex number. + + The absolute value of . + A complex number. + + + + Returns e raised to the power specified by a complex number. + + The number e raised to the power . + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a complex number. + + The complex number raised to the power . + A complex number to be raised to a power. + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a single-precision floating-point number. + + The complex number raised to the power . + A complex number to be raised to a power. + A single-precision floating-point number that specifies a power. + + + + Returns the natural (base e) logarithm of a specified complex number. + + The natural (base e) logarithm of . + A complex number. + + + + Returns the logarithm of a specified complex number in a specified base. + + The logarithm of in base . + A complex number. + The base of the logarithm. + + + + Returns the base-10 logarithm of a specified complex number. + + The base-10 logarithm of . + A complex number. + + + + Returns the sine of the specified complex number. + + The sine of . + A complex number. + + + + Returns the cosine of the specified complex number. + + The cosine of . + A complex number. + + + + Returns the tangent of the specified complex number. + + The tangent of . + A complex number. 
+ + + + Returns the angle that is the arc sine of the specified complex number. + + The angle which is the arc sine of . + A complex number. + + + + Returns the angle that is the arc cosine of the specified complex number. + + The angle, measured in radians, which is the arc cosine of . + A complex number that represents a cosine. + + + + Returns the angle that is the arc tangent of the specified complex number. + + The angle that is the arc tangent of . + A complex number. + + + + Returns the hyperbolic sine of the specified complex number. + + The hyperbolic sine of . + A complex number. + + + + Returns the hyperbolic cosine of the specified complex number. + + The hyperbolic cosine of . + A complex number. + + + + Returns the hyperbolic tangent of the specified complex number. + + The hyperbolic tangent of . + A complex number. + + + + Extension methods for the Complex type provided by System.Numerics + + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex. + + + + Gets the conjugate of the Complex number. + + The number to perform this operation on. + + The semantic of setting the conjugate is such that + + // a, b of type Complex32 + a.Conjugate = b; + + is equivalent to + + // a, b of type Complex32 + a = b.Conjugate + + + The conjugate of the number. + + + + Returns the multiplicative inverse of a complex number. + + + + + Exponential of this Complex (exp(x), E^x). + + The number to perform this operation on. + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex (Base E). + + The number to perform this operation on. + + The natural logarithm of this complex number. + + + + + Common Logarithm of this Complex (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex with custom base. + + The logarithm of this complex number. + + + + Raise this Complex to the given value. + + The number to perform this operation on. + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex to the inverse of the given value. + + The number to perform this operation on. + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex + + The number to perform this operation on. + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex + + The number to perform this operation on. + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex. + + + + + Evaluate all cubic roots of this Complex. + + + + + Gets a value indicating whether the Complex32 is zero. + + The number to perform this operation on. + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + The number to perform this operation on. + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + The number to perform this operation on. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. 
+ + The number to perform this operation on. + + true if this instance is NaN; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. + + The number to perform this operation on. + + true if this instance is infinite; otherwise, false. + + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + The number to perform this operation on. + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + The number to perform this operation on. + + true if this instance is real nonnegative number; otherwise, false. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + The string to parse. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as double. + + + + + Converts the string representation of a complex number to a double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. 
+ + + the string to parse. + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + A collection of frequently used mathematical constants. + + + + The number e + + + The number log[2](e) + + + The number log[10](e) + + + The number log[e](2) + + + The number log[e](10) + + + The number log[e](pi) + + + The number log[e](2*pi)/2 + + + The number 1/e + + + The number sqrt(e) + + + The number sqrt(2) + + + The number sqrt(3) + + + The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 + + + The number sqrt(3)/2 + + + The number pi + + + The number pi*2 + + + The number pi/2 + + + The number pi*3/2 + + + The number pi/4 + + + The number sqrt(pi) + + + The number sqrt(2pi) + + + The number sqrt(pi/2) + + + The number sqrt(2*pi*e) + + + The number log(sqrt(2*pi)) + + + The number log(sqrt(2*pi*e)) + + + The number log(2 * sqrt(e / pi)) + + + The number 1/pi + + + The number 2/pi + + + The number 1/sqrt(pi) + + + The number 1/sqrt(2pi) + + + The number 2/sqrt(pi) + + + The number 2 * sqrt(e / pi) + + + The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). + + + + + The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). + + + + + The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. amplitude, current, voltage). + + + The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. + + + The Catalan constant + Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } + + + The Euler-Mascheroni constant + lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } + + + The number (1+sqrt(5))/2, also known as the golden ratio + + + The Glaisher constant + e^(1/12 - Zeta(-1)) + + + The Khinchin constant + prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} + + + + The size of a double in bytes. + + + + + The size of an int in bytes. + + + + + The size of a float in bytes. + + + + + The size of a Complex in bytes. + + + + + The size of a Complex in bytes. 
+ + + + Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) + + + Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) + + + Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) + + + Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) + + + Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) + + + Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) + + + Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) + + + Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) + + + Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) + + + Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) + + + Planck time: t_p = l_p/c_0 [s] (2007 CODATA) + + + Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) + + + Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) + + + Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) + + + Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) + + + Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) + + + Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) + + + Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) + + + Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) + + + Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) + + + Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) + + + Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) + + + Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) + + + Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) + + + Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) + + + Electron Mass: [kg] (2007 CODATA) + + + Electron Mass Energy Equivalent: [J] (2007 CODATA) + + + Electron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Electron Compton Wavelength: [m] (2007 CODATA) + + + Classical Electron Radius: [m] (2007 CODATA) + + + Thomson Cross Section: [m^2] (2002 CODATA) + + + Electron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Electon G-Factor: [1] (2007 CODATA) + + + Muon Mass: [kg] (2007 CODATA) + + + Muon Mass Energy Equivalent: [J] (2007 CODATA) + + + Muon Molar Mass: [kg mol^-1] (2007 CODATA) + + + Muon Compton Wavelength: [m] (2007 CODATA) + + + Muon Magnetic Moment: [J T^-1] (2007 CODATA) + + + Muon G-Factor: [1] (2007 CODATA) + + + Tau Mass: [kg] (2007 CODATA) + + + Tau Mass Energy Equivalent: [J] (2007 CODATA) + + + Tau Molar Mass: [kg mol^-1] (2007 CODATA) + + + Tau Compton Wavelength: [m] (2007 CODATA) + + + Proton Mass: [kg] (2007 CODATA) + + + Proton Mass Energy Equivalent: [J] (2007 CODATA) + + + Proton Molar Mass: [kg mol^-1] (2007 CODATA) + + + Proton Compton Wavelength: [m] (2007 CODATA) + + + Proton Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton G-Factor: [1] (2007 CODATA) + + + Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Neutron Mass: [kg] (2007 CODATA) + + + Neutron Mass Energy Equivalent: [J] (2007 CODATA) + + + Neutron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Neuron Compton Wavelength: [m] (2007 CODATA) + + + Neutron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Neutron G-Factor: [1] 
(2007 CODATA) + + + Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Deuteron Mass: [kg] (2007 CODATA) + + + Deuteron Mass Energy Equivalent: [J] (2007 CODATA) + + + Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Helion Mass: [kg] (2007 CODATA) + + + Helion Mass Energy Equivalent: [J] (2007 CODATA) + + + Helion Molar Mass: [kg mol^-1] (2007 CODATA) + + + Avogadro constant: [mol^-1] (2010 CODATA) + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 + + + The SI prefix factor corresponding to 1 000 + + + The SI prefix factor corresponding to 100 + + + The SI prefix factor corresponding to 10 + + + The SI prefix factor corresponding to 0.1 + + + The SI prefix factor corresponding to 0.01 + + + The SI prefix factor corresponding to 0.001 + + + The SI prefix factor corresponding to 0.000 001 + + + The SI prefix factor corresponding to 0.000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 + + + + Sets parameters for the library. + + + + + Use a specific provider if configured, e.g. using + environment variables, or fall back to the best providers. + + + + + Use the best provider available. + + + + + Use the Intel MKL native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Use the Intel MKL native provider for linear algebra, with the specified configuration parameters. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the Intel MKL native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Use the Nvidia CUDA native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the Nvidia CUDA native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Use the OpenBLAS native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the OpenBLAS native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Try to use any available native provider in an undefined order. + + + True if one of the native providers was found and successfully initialized. + False if it failed and the previous provider is still active. 
+ + + + + Gets or sets a value indicating whether the distribution classes check validate each parameter. + For the multivariate distributions this could involve an expensive matrix factorization. + The default setting of this property is true. + + + + + Gets or sets a value indicating whether to use thread safe random number generators (RNG). + Thread safe RNG about two and half time slower than non-thread safe RNG. + + + true to use thread safe random number generators ; otherwise, false. + + + + + Optional path to try to load native provider binaries from. + + + + + Gets or sets a value indicating how many parallel worker threads shall be used + when parallelization is applicable. + + Default to the number of processor cores, must be between 1 and 1024 (inclusive). + + + + Gets or sets the TaskScheduler used to schedule the worker tasks. + + + + + Gets or sets the order of the matrix when linear algebra provider + must calculate multiply in parallel threads. + + The order. Default 64, must be at least 3. + + + + Gets or sets the number of elements a vector or matrix + must contain before we multiply threads. + + Number of elements. Default 300, must be at least 3. + + + + Numerical Derivative. + + + + + Initialized a NumericalDerivative with the given points and center. + + + + + Initialized a NumericalDerivative with the default points and center for the given order. + + + + + Evaluates the derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + Derivative order. + + + + Creates a function handle for the derivative of a scalar univariate function. + + Univariate function handle. + Derivative order. + + + + Evaluates the first derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the first derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the second derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the second derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + Derivative order. + + + + Evaluates the first partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + + + + Evaluates the partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + Derivative order. 
+ + + + Evaluates the first partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + + + + Class to calculate finite difference coefficients using Taylor series expansion method. + + + For n points, coefficients are calculated up to the maximum derivative order possible (n-1). + The current function value position specifies the "center" for surrounding coefficients. + Selecting the first, middle or last positions represent forward, backwards and central difference methods. + + + + + + + Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. + + + + + Initializes a new instance of the class. + + Number of finite difference coefficients. + + + + Gets the finite difference coefficients for a specified center and order. + + Current function position with respect to coefficients. Must be within point range. + Order of finite difference coefficients. + Vector of finite difference coefficients. + + + + Gets the finite difference coefficients for all orders at a specified center. + + Current function position with respect to coefficients. Must be within point range. + Rectangular array of coefficients, with columns specifying order. + + + + Type of finite different step size. + + + + + The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. + + + + + A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however + this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the + function input parameter and not the order of the finite difference derivative. + + + + + A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order + and function input parameter. The initial scaling according to finite different coefficient order can be thought of as producing a + base step size, h, that is equivalent to scaling. This step size is then scaled according to the function + input parameter. Although implementation may vary, an example of second order accurate scaling may be (eps)^(1/3)*(1+abs(x)). + + + + + Class to evaluate the numerical derivative of a function using finite difference approximations. + Variable point and center methods can be initialized . + This class can also be used to return function handles (delegates) for a fixed derivative order and variable. + It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions respectively. + + + + + Initializes a NumericalDerivative class with the default 3 point center difference method. + + + + + Initialized a NumericalDerivative class. + + Number of points for finite difference derivatives. + Location of the center with respect to other points. Value ranges from zero to points-1. + + + + Sets and gets the finite difference step size. This value is for each function evaluation if relative step size types are used. + If the base step size used in scaling is desired, see . + + + Setting then getting the StepSize may return a different value. 
This is not unusual since a user-defined step size is converted to a + base-2 representable number to improve finite difference accuracy. + + + + + Sets and gets the base finite difference step size. This assigned value to this parameter is only used if is set to RelativeX. + However, if the StepType is Relative, it will contain the base step size computed from based on the finite difference order. + + + + + Sets and gets the base finite difference step size. This parameter is only used if is set to Relative. + By default this is set to machine epsilon, from which is computed. + + + + + Sets and gets the location of the center point for the finite difference derivative. + + + + + Number of times a function is evaluated for numerical derivatives. + + + + + Type of step size for computing finite differences. If set to absolute, dx = h. + If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when + h is approximately equal to the square-root of machine accuracy, epsilon. + + + + + Evaluates the derivative of equidistant points using the finite difference method. + + Vector of points StepSize apart. + Derivative order. + Finite difference step size. + Derivative of points of the specified order. + + + + Evaluates the derivative of a scalar univariate function. + + + Supplying the optional argument currentValue will reduce the number of function evaluations + required to calculate the finite difference derivative. + + Function handle. + Point at which to compute the derivative. + Derivative order. + Current function value at center. + Function derivative at x of the specified order. + + + + Creates a function handle for the derivative of a scalar univariate function. + + Input function handle. + Derivative order. + Function handle that evaluates the derivative of input function at a fixed order. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Function partial derivative at x of the specified order. + + + + Evaluates the partial derivatives of a multivariate function array. + + + This function assumes the input vector x is of the correct length for f. + + Multivariate vector function array handle. + Vector at which to evaluate the derivatives. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Vector of functions partial derivatives at x of the specified order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at a fixed order. + + + + Creates a function handle for the partial derivative of a vector multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at fixed order. + + + + Evaluates the mixed partial derivative of variable order for multivariate functions. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function handle. + Points at which to evaluate the derivative. 
+ Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivative at x of the specified order. + + + + Evaluates the mixed partial derivative of variable order for multivariate function arrays. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function array handle. + Vector at which to evaluate the derivative. + Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivatives at x of the specified order. + + + + Creates a function handle for the mixed partial derivative of a multivariate function. + + Input function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Creates a function handle for the mixed partial derivative of a multivariate vector function. + + Input vector function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Resets the evaluation counter. + + + + + Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Hessian object with a three point central difference method. + + + + + Creates a numerical Hessian with a specified differentiation scheme. + + Number of points for Hessian evaluation. + Center point for differentiation. + + + + Evaluates the Hessian of the scalar univariate function f at points x. + + Scalar univariate function handle. + Point at which to evaluate Hessian. + Hessian tensor. + + + + Evaluates the Hessian of a multivariate function f at points x. + + + This method of computing the Hessian is only valid for Lipschitz continuous functions. + The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. + + Multivariate function handle.> + Points at which to evaluate Hessian.> + Hessian tensor. + + + + Resets the function evaluation counter for the Hessian. + + + + + Class for evaluating the Jacobian of a function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Jacobian object with a three point central difference method. + + + + + Creates a numerical Jacobian with a specified differentiation scheme. + + Number of points for Jacobian evaluation. + Center point for differentiation. + + + + Evaluates the Jacobian of scalar univariate function f at point x. + + Scalar univariate function handle. + Point at which to evaluate Jacobian. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function f at vector x. + + + This function assumes that the length of vector x consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Jacobian vector. 
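The finite-difference classes documented above (NumericalDerivative plus the Hessian and Jacobian evaluators) are designed to be constructed once and reused across many evaluations. A hedged sketch of typical usage follows; the namespace MathNet.Numerics.Differentiation and the class names NumericalHessian and NumericalJacobian are assumptions, while NumericalDerivative and its points/center constructor follow the text above:

    using System;
    using MathNet.Numerics.Differentiation;

    class FiniteDifferenceDemo
    {
        static void Main()
        {
            // 5-point stencil with the center at index 2 -> central differences.
            var nd = new NumericalDerivative(5, 2);

            Func<double, double> f = x => Math.Exp(x);
            double d2 = nd.EvaluateDerivative(f, 1.0, 2);   // ~e

            // Partial derivative of g(x0, x1) = x0^2 + 3*x1 with respect to x0 at (2, 1) -> ~4
            Func<double[], double> g = v => v[0] * v[0] + 3.0 * v[1];
            double dGdX0 = nd.EvaluatePartialDerivative(g, new[] { 2.0, 1.0 }, 0, 1);

            // Hessian and Jacobian helpers default to a 3-point central scheme.
            double[,] H = new NumericalHessian().Evaluate(g, new[] { 2.0, 1.0 }); // H[0,0] ~2
            double[] J = new NumericalJacobian().Evaluate(g, new[] { 2.0, 1.0 }); // J[0] ~4

            Console.WriteLine($"{d2} {dGdX0} {H[0, 0]} {J[0]}");
        }
    }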
+ + + + Evaluates the Jacobian of a multivariate function f at vector x given a current function value. + + + To minimize the number of function evaluations, a user can supply the current value of the function + to be used in computing the Jacobian. This value must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Current function value at finite difference center. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function array f at vector x. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Jacobian matrix. + + + + Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. + + + To minimize the number of function evaluations, a user can supply a vector of current values of the functions + to be used in computing the Jacobian. These value must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Vector of current function values. + Jacobian matrix. + + + + Resets the function evaluation counter for the Jacobian. + + + + + Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Double-Exponential integration. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The number of Gauss-Legendre points. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Gauss-Kronrod integration. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the differintegral of order n at x. + + + + Metrics to measure the distance between two structures. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. 
the L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Pearson's distance, i.e. 1 - the person correlation coefficient. + + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Discrete Univariate Bernoulli distribution. + The Bernoulli distribution is a distribution over bits. The parameter + p specifies the probability that a 1 is generated. + Wikipedia - Bernoulli distribution. + + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + If the Bernoulli parameter is not in the range [0,1]. + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + If the Bernoulli parameter is not in the range [0,1]. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. 
+ + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Generates one sample from the Bernoulli distribution. + + The random source to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A random sample from the Bernoulli distribution. + + + + Samples a Bernoulli distributed random variable. + + A sample from the Bernoulli distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. + + + + Samples a sequence of Bernoulli distributed random variables. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. 
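To make the distance-metric and Bernoulli entries above concrete, here is a short C# sketch. The static Distance class name and the argument order of the static PMF form are assumptions; the Bernoulli constructor, Mean and the array-filling Samples overload follow the descriptions above:

    using System;
    using MathNet.Numerics;
    using MathNet.Numerics.Distributions;

    class DistanceAndBernoulliDemo
    {
        static void Main()
        {
            double[] a = { 1.0, 2.0, 3.0 };
            double[] b = { 2.0, 4.0, 6.0 };

            Console.WriteLine(Distance.Euclidean(a, b));   // L2-norm of the difference
            Console.WriteLine(Distance.Manhattan(a, b));   // L1-norm of the difference
            Console.WriteLine(Distance.Chebyshev(a, b));   // Infinity-norm of the difference
            Console.WriteLine(Distance.Cosine(a, b));      // ~0, the vectors are parallel

            // Bernoulli: p is the probability of drawing a 1.
            var coin = new Bernoulli(0.5);
            Console.WriteLine(coin.Mean);                  // 0.5
            Console.WriteLine(Bernoulli.PMF(0.5, 1));      // static form: P(X = 1) = 0.5

            int[] flips = new int[10];
            coin.Samples(flips);                           // fill an array with samples
        }
    }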
+ + + + Samples a sequence of Bernoulli distributed random variables. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Continuous Univariate Beta distribution. + For details about this distribution, see + Wikipedia - Beta distribution. + + + There are a few special cases for the parameterization of the Beta distribution. When both + shape parameters are positive infinity, the Beta distribution degenerates to a point distribution + at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point + distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution + degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the + distribution degenerates to a point distribution at the non-zero shape parameter. + + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + A string representation of the Beta distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. + + + + + Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Beta distribution. + + + + + Gets the variance of the Beta distribution. + + + + + Gets the standard deviation of the Beta distribution. + + + + + Gets the entropy of the Beta distribution. + + + + + Gets the skewness of the Beta distribution. + + + + + Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. + + + + + Gets the median of the Beta distribution. + + + + + Gets the minimum of the Beta distribution. + + + + + Gets the maximum of the Beta distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . 
+ + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Beta distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Beta distribution. + + a sequence of samples from the distribution. + + + + Samples Beta distributed random variables by sampling two Gamma variables and normalizing. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a random number from the Beta distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. 
+ The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Beta-Binomial distribution. + The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising + when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. + The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. + It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. + Wikipedia - Beta-Binomial distribution. + + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. 
Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a random number from the BetaBinomial distribution. + + + + Samples a BetaBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of BetaBinomial distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a BetaBinomial distributed random variable. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Samples an array of BetaBinomial distributed random variables. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. 
+ + + + Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast + is used to construct an underlying beta distribution. + + The minimum value. + The maximum value. + The most likely value (mode). + The random number generator which is used to draw random samples. + The Beta distribution derived from the PERT parameters. + + + + A string representation of the distribution. + + A string representation of the BetaScaled distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. + + + + + Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. + + + + + Gets the location (μ) of the BetaScaled distribution. + + + + + Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the BetaScaled distribution. + + + + + Gets the variance of the BetaScaled distribution. + + + + + Gets the standard deviation of the BetaScaled distribution. + + + + + Gets the entropy of the BetaScaled distribution. + + + + + Gets the skewness of the BetaScaled distribution. + + + + + Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. + + + + + Gets the median of the BetaScaled distribution. + + + + + Gets the minimum of the BetaScaled distribution. + + + + + Gets the maximum of the BetaScaled distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the BetaScaled distribution. 
Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. 
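A short C# usage sketch for the Beta and BetaScaled entries above. The PERT factory and its argument order (minimum, maximum, most likely value) follow the description above; the static PDF/CDF argument order is an assumption:

    using System;
    using MathNet.Numerics.Distributions;

    class BetaDemo
    {
        static void Main()
        {
            // Standard Beta on [0, 1] with shape parameters a = 2, b = 5.
            var beta = new Beta(2.0, 5.0);
            Console.WriteLine(beta.Mean);                 // a / (a + b) = 2/7
            Console.WriteLine(Beta.PDF(2.0, 5.0, 0.3));   // static density form
            Console.WriteLine(Beta.CDF(2.0, 5.0, 0.3));

            // BetaScaled adds location and scale, e.g. for PERT-style expert estimates:
            // minimum 10, maximum 20, most likely 12.
            var pert = BetaScaled.PERT(10.0, 20.0, 12.0);
            Console.WriteLine(pert.Mean);
            Console.WriteLine(pert.Sample());
        }
    }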
+ + + + Discrete Univariate Binomial distribution. + For details about this distribution, see + Wikipedia - Binomial distribution. + + + The distribution is parameterized by a probability (between 0.0 and 1.0). + + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + If is not in the interval [0.0,1.0]. + If is negative. + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The random number generator which is used to draw random samples. + If is not in the interval [0.0,1.0]. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + + + + Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. + + + + + Gets the number of trials. Range: n ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the Binomial distribution without doing parameter checking. + + The random number generator to use. + The success probability (p) in each trial. 
Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successful trials. + + + + Samples a Binomially distributed random variable. + + The number of successes in N trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Binomially distributed random variables. + + a sequence of successes in N trials. + + + + Samples a binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Samples a binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Gets the scale (a) of the distribution. Range: a > 0. + + + + + Gets the first shape parameter (c) of the distribution. Range: c > 0. + + + + + Gets the second shape parameter (k) of the distribution. Range: k > 0. + + + + + Initializes a new instance of the Burr Type XII class. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Burr distribution. + + + + + Gets the variance of the Burr distribution. + + + + + Gets the standard deviation of the Burr distribution. + + + + + Gets the mode of the Burr distribution. + + + + + Gets the minimum of the Burr distribution. + + + + + Gets the maximum of the Burr distribution. + + + + + Gets the entropy of the Burr distribution (currently not supported). + + + + + Gets the skewness of the Burr distribution. + + + + + Gets the median of the Burr distribution. + + + + + Generates a sample from the Burr distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. 
+ + The array to fill with the samples. + + + + Generates a sequence of samples from the Burr distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Discrete Univariate Categorical distribution. + For details about this distribution, see + Wikipedia - Categorical distribution. This + distribution is sometimes called the Discrete distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. 
+ + + Support: 0..k where k = length(probability mass array)-1 + + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class from a . The distribution + will not be automatically updated when the histogram changes. The categorical distribution will have + one value for each bucket and a probability for that value proportional to the bucket count. + + The histogram from which to create the categorical variable. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Checks whether the parameters of the distribution are valid. + + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Checks whether the parameters of the distribution are valid. + + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Gets the probability mass vector (non-negative ratios) of the multinomial. + + Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a . + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets he mode of the distribution. + + Throws a . + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. 
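For the discrete distributions above, a hedged C# sketch combining Binomial and Categorical. The Binomial constructor order (success probability, then trial count) and the unnormalized ratio vector for Categorical follow the descriptions above; the argument order of the static PMF forms is an assumption:

    using System;
    using MathNet.Numerics.Distributions;

    class DiscreteDemo
    {
        static void Main()
        {
            // Binomial: 10 trials with success probability 0.3.
            var bin = new Binomial(0.3, 10);
            Console.WriteLine(bin.Mean);                   // n * p = 3
            Console.WriteLine(Binomial.PMF(0.3, 10, 3));   // static P(X = 3)

            // Categorical over the unnormalized ratios {1, 2, 7}:
            // index 2 is drawn with probability 0.7.
            double[] ratios = { 1.0, 2.0, 7.0 };
            var cat = new Categorical(ratios);
            int k = cat.Sample();                          // integer in 0..2
            Console.WriteLine(k);
            Console.WriteLine(Categorical.PMF(ratios, 2)); // 0.7
        }
    }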
+ + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the cumulative distribution function. This method performs no parameter checking. + If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + An array representing the unnormalized cumulative distribution function. + + + + Returns one trials from the categorical distribution. + + The random number generator to use. + The (unnormalized) cumulative distribution of the probability distribution. + One sample from the categorical distribution implied by . + + + + Samples a Binomially distributed random variable. + + The number of successful trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of successful trial counts. + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. 
+ random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Continuous Univariate Cauchy distribution. + The Cauchy distribution is a symmetric continuous probability distribution. For details about this distribution, see + Wikipedia - Cauchy distribution. + + + + + Initializes a new instance of the class with the location parameter set to 0 and the scale parameter set to 1 + + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Gets the location (x0) of the distribution. + + + + + Gets the scale (γ) of the distribution. Range: γ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. 
+ + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. 
+ + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi distribution. + This distribution is a continuous probability distribution. The distribution usually arises when a k-dimensional vector's orthogonal + components are independent and each follow a standard normal distribution. The length of the vector will + then have a chi distribution. + Wikipedia - Chi distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Chi distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Chi distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. 
+ the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi-Squared distribution. + This distribution is a sum of the squares of k independent standard normal random variables. + Wikipedia - ChiSquare distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi-Squared distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. 
This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ChiSquare distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ChiSquare distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + Generates a sample from the ChiSquare distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sample from the ChiSquare distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Continuous Univariate Uniform distribution. + The continuous uniform distribution is a distribution over real numbers. For details about this distribution, see + Wikipedia - Continuous uniform distribution. + + + + + Initializes a new instance of the ContinuousUniform class with lower bound 0 and upper bound 1. + + + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + If the upper bound is smaller than the lower bound. 
+ + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + If the upper bound is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ContinuousUniform distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the inverse cumulative density at . 
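The ContinuousUniform entries above spell out the density, CDF and quantile (inverse CDF) in terms of the lower and upper bounds. A minimal sketch of those three formulas, assuming lower < upper (illustrative Python, not the library's implementation):

```python
def uniform_pdf(lower: float, upper: float, x: float) -> float:
    """Density is 1/(upper-lower) inside [lower, upper], 0 outside."""
    return 1.0 / (upper - lower) if lower <= x <= upper else 0.0

def uniform_cdf(lower: float, upper: float, x: float) -> float:
    """P(X <= x), clamped to [0, 1]."""
    if x < lower:
        return 0.0
    if x > upper:
        return 1.0
    return (x - lower) / (upper - lower)

def uniform_inv_cdf(lower: float, upper: float, p: float) -> float:
    """Quantile function: linear interpolation between the bounds."""
    return lower + p * (upper - lower)
```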
+ + + + + Generates a sample from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. + + + + Generates a sample from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. + + + + Discrete Univariate Conway-Maxwell-Poisson distribution. + The Conway-Maxwell-Poisson distribution is a generalization of the Poisson, Geometric and Bernoulli + distributions. It is parameterized by two real numbers "lambda" and "nu". For + + nu = 0 the distribution reverts to a Geometric distribution + nu = 1 the distribution reverts to the Poisson distribution + nu -> infinity the distribution converges to a Bernoulli distribution + + This implementation will cache the value of the normalization constant. + Wikipedia - ConwayMaxwellPoisson distribution. + + + + + The mean of the distribution. + + + + + The variance of the distribution. + + + + + Caches the value of the normalization constant. + + + + + Since many properties of the distribution can only be computed approximately, the tolerance + level specifies how much error we accept. + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Gets the lambda (λ) parameter. Range: λ > 0. + + + + + Gets the rate of decay (ν) parameter. Range: ν ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. 
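For the Conway-Maxwell-Poisson entries that begin above: the standard CMP pmf is proportional to λ^k / (k!)^ν, and the documentation notes that the normalization constant is cached and only computed approximately up to a tolerance. A hedged sketch of a truncated normalization and the resulting pmf (illustrative Python; the function names and the simple truncation rule are my assumptions, not the library's code):

```python
import math

def cmp_normalization(lmbda: float, nu: float, tol: float = 1e-12) -> float:
    """Approximate Z(lambda, nu) = sum_j lambda^j / (j!)^nu by truncating the
    series once a term falls below a relative tolerance. The series only
    converges for nu > 0, or for nu == 0 with lambda < 1 (the geometric case)."""
    z, term, j = 1.0, 1.0, 0
    while True:
        j += 1
        term *= lmbda / (j ** nu)   # builds lambda^j / (j!)^nu incrementally
        z += term
        if term < tol * z or j > 10_000:
            return z

def cmp_pmf(k: int, lmbda: float, nu: float) -> float:
    """P(X = k) = lambda^k / ((k!)^nu * Z(lambda, nu)), for lambda > 0, nu >= 0."""
    log_unnormalized = k * math.log(lmbda) - nu * math.lgamma(k + 1)
    return math.exp(log_unnormalized) / cmp_normalization(lmbda, nu)
```

With ν = 1 this reduces to the Poisson pmf and with ν = 0, λ < 1 to the geometric pmf, matching the limiting cases listed above.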
+ + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the cumulative distribution at location . + + + + + Gets the normalization constant of the Conway-Maxwell-Poisson distribution. + + + + + Computes an approximate normalization constant for the CMP distribution. + + The lambda (λ) parameter for the CMP distribution. + The rate of decay (ν) parameter for the CMP distribution. + + an approximate normalization constant for the CMP distribution. + + + + + Returns one trials from the distribution. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The z parameter. + + One sample from the distribution implied by , , and . + + + + + Samples a Conway-Maxwell-Poisson distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples a sequence of a Conway-Maxwell-Poisson distributed random variables. + + + a sequence of samples from a Conway-Maxwell-Poisson distribution. + + + + + Samples a random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter. 
Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Multivariate Dirichlet distribution. For details about this distribution, see + Wikipedia - Dirichlet distribution. + + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + No parameter can be less than zero and at least one parameter should be larger than zero. + + The parameters of the Dirichlet distribution. + + + + Gets or sets the parameters of the Dirichlet distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the dimension of the Dirichlet distribution. + + + + + Gets the sum of the Dirichlet parameters. + + + + + Gets the mean of the Dirichlet distribution. + + + + + Gets the variance of the Dirichlet distribution. + + + + + Gets the entropy of the distribution. + + + + + Computes the density of the distribution. + + The locations at which to compute the density. + the density at . + The Dirichlet distribution requires that the sum of the components of x equals 1. + You can also leave out the last component, and it will be computed from the others. + + + + Computes the log density of the distribution. + + The locations at which to compute the density. + the density at . + + + + Samples a Dirichlet distributed random vector. + + A sample from this distribution. + + + + Samples a Dirichlet distributed random vector. + + The random number generator to use. + The Dirichlet distribution parameter. + a sample from the distribution. + + + + Discrete Univariate Uniform distribution. + The discrete uniform distribution is a distribution over integers. The distribution + is parameterized by a lower and upper bound (both inclusive). + Wikipedia - Discrete uniform distribution. + + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Gets the inclusive lower bound of the probability distribution. + + + + + Gets the inclusive upper bound of the probability distribution. 
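Stepping back briefly to the Dirichlet entries above (before the DiscreteUniform section starts): a Dirichlet vector is commonly drawn by sampling independent Gamma(α_i, 1) variates and normalizing them to sum to one. A sketch of that standard construction (illustrative Python with NumPy; not necessarily the routine the library uses):

```python
import numpy as np

def dirichlet_sample(alpha: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Draw independent Gamma(alpha_i, 1) variates and normalize them so the
    components are non-negative and sum to one."""
    g = rng.gamma(shape=alpha, scale=1.0)
    return g / g.sum()

rng = np.random.default_rng(42)
print(dirichlet_sample(np.array([2.0, 3.0, 5.0]), rng))
```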
+ + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution; since every element in the domain has the same probability this method returns the middle one. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Generates one sample from the discrete uniform distribution. This method does not do any parameter checking. + + The random source to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A random sample from the discrete uniform distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of uniformly distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a uniformly distributed random variable. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. 
+ Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Samples a uniformly distributed random variable. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Continuous Univariate Erlang distribution. + This distribution is a continuous probability distribution with wide applicability primarily due to its + relation to the exponential and Gamma distributions. + Wikipedia - Erlang distribution. + + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Erlang distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The scale (μ) of the Erlang distribution. Range: μ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Erlang distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Gets the shape (k) of the Erlang distribution. Range: k ≥ 0. + + + + + Gets the rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + + Gets the scale of the Erlang distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum value. + + + + + Gets the Maximum value. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). 
+ + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Erlang distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Erlang distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Exponential distribution. + The exponential distribution is a distribution over the real numbers parameterized by one non-negative parameter. + Wikipedia - exponential distribution. + + + + + Initializes a new instance of the class. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. 
+ + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Gets the rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Exponential distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. 
+ The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Draws a random sample from the distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate F-distribution, also known as Fisher-Snedecor distribution. + For details about this distribution, see + Wikipedia - FisherSnedecor distribution. + + + + + Initializes a new instance of the class. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Initializes a new instance of the class. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Gets the first degree of freedom (d1) of the distribution. Range: d1 > 0. + + + + + Gets the second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. 
+ + + + Generates a sample from the FisherSnedecor distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the FisherSnedecor distribution. + + a sequence of samples from the distribution. + + + + Generates one sample from the FisherSnedecor distribution without parameter checking. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a FisherSnedecor distributed random number. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. 
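The FisherSnedecor sampling entries above generate F-distributed values from the two degrees of freedom d1 and d2. The textbook construction takes the ratio of two independent chi-square variates, each divided by its degrees of freedom; a sketch (illustrative Python with NumPy, not the library's code):

```python
import numpy as np

def f_sample(d1: float, d2: float, rng: np.random.Generator) -> float:
    """F variate as a ratio of scaled chi-square variates:
    F = (X1/d1) / (X2/d2) with X1 ~ chi2(d1), X2 ~ chi2(d2)."""
    x1 = rng.chisquare(d1)
    x2 = rng.chisquare(d2)
    return (x1 / d1) / (x2 / d2)

rng = np.random.default_rng(0)
samples = [f_sample(5.0, 10.0, rng) for _ in range(3)]
```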
+ + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Gamma distribution. + For details about this distribution, see + Wikipedia - Gamma distribution. + + + The Gamma distribution is parametrized by a shape and inverse scale parameter. When we want + to specify a Gamma distribution which is a point distribution we set the shape parameter to be the + location of the point distribution and the inverse scale as positive infinity. The distribution + with shape and inverse scale both zero is undefined. + + Random number generation for the Gamma distribution is based on the algorithm in: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Gamma distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Gamma distribution. Range: k ≥ 0. + The scale (θ) of the Gamma distribution. Range: θ ≥ 0 + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Gamma distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Gets or sets the shape (k, α) of the Gamma distribution. Range: α ≥ 0. + + + + + Gets or sets the rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + + Gets or sets the scale (θ) of the Gamma distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Gamma distribution. + + + + + Gets the variance of the Gamma distribution. + + + + + Gets the standard deviation of the Gamma distribution. + + + + + Gets the entropy of the Gamma distribution. + + + + + Gets the skewness of the Gamma distribution. + + + + + Gets the mode of the Gamma distribution. + + + + + Gets the median of the Gamma distribution. + + + + + Gets the minimum of the Gamma distribution. + + + + + Gets the maximum of the Gamma distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . 
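The Gamma remarks above cite Marsaglia & Tsang's "A Simple Method for Generating Gamma Variables" for random number generation. A compact sketch of that acceptance method, with the shape < 1 case handled by the usual boosting trick (illustrative Python with NumPy; assumes shape > 0 and rate > 0):

```python
import numpy as np

def gamma_sample_mt(shape: float, rate: float, rng: np.random.Generator) -> float:
    """Marsaglia & Tsang (2000) style draw from Gamma(shape, rate),
    where rate is the inverse scale. Assumes shape > 0 and rate > 0."""
    if shape < 1.0:
        # Boost: a Gamma(a) variate equals a Gamma(a+1) variate times U^(1/a).
        u = rng.random()
        return gamma_sample_mt(shape + 1.0, rate, rng) * u ** (1.0 / shape)
    d = shape - 1.0 / 3.0
    c = 1.0 / np.sqrt(9.0 * d)
    while True:
        x = rng.standard_normal()
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue
        u = rng.random()
        # Quick squeeze test, then the full log acceptance test.
        if u < 1.0 - 0.0331 * x ** 4 or np.log(u) < 0.5 * x * x + d * (1.0 - v + np.log(v)):
            return d * v / rate   # divide by rate to convert from scale 1
```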
+ + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Gamma distribution. + + a sequence of samples from the distribution. + + + + Sampling implementation based on: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + This method performs no parameter checks. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + A sample from a Gamma distributed random variable. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. 
+ The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Geometric distribution. + The Geometric distribution is a distribution over positive integers parameterized by one positive real number. + This implementation of the Geometric distribution will never generate 0's. + Wikipedia - geometric distribution. + + + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a not supported exception. + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. 
+ The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Returns one sample from the distribution. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + One sample from the distribution implied by . + + + + Samples a Geometric distributed random variable. + + A sample from the Geometric distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Geometric distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Discrete Univariate Hypergeometric distribution. + This distribution is a discrete probability distribution that describes the number of successes in a sequence + of n draws from a finite population without replacement, just as the binomial distribution + describes the number of successes for draws with replacement + Wikipedia - Hypergeometric distribution. + + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the size of the population (N). + + + + + Gets the number of draws without replacement (n). + + + + + Gets the number successes within the population (K, M). + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. 
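Before the Hypergeometric entries continue above, the Geometric entries describe a distribution over positive integers that never generates 0, i.e. it counts the trial on which the first success occurs. A minimal sketch, assuming the class is MathNet.Numerics.Distributions.Geometric (the namespace is an assumption; the member names are taken from the text):

```csharp
using System;
using MathNet.Numerics.Distributions;

class GeometricDemo
{
    static void Main()
    {
        // p = 0.25: probability of success on each trial. Range: 0 ≤ p ≤ 1.
        var geo = new Geometric(0.25);

        // Support is {1, 2, 3, ...} in this parameterization, so the mean is 1/p.
        Console.WriteLine($"mean    = {geo.Mean}");            // 4
        Console.WriteLine($"P(X=1)  = {geo.Probability(1)}");  // p = 0.25
        Console.WriteLine($"P(X=3)  = {geo.Probability(3)}");  // (1-p)^2 * p
        Console.WriteLine($"P(X<=3) = {geo.CumulativeDistribution(3)}");

        // Draw a few samples; none of them can be 0.
        for (int i = 0; i < 5; i++)
        {
            Console.Write($"{geo.Sample()} ");
        }
        Console.WriteLine();
    }
}
```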
+ + + + + Gets the maximum of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the cumulative distribution at location . + + + + + Generates a sample from the Hypergeometric distribution without doing parameter checking. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The n parameter of the distribution. + a random number from the Hypergeometric distribution. + + + + Samples a Hypergeometric distributed random variable. + + The number of successes in n trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Hypergeometric distributed random variables. + + a sequence of successes in n trials. + + + + Samples a random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). 
+ The number of draws without replacement (n). + + + + Continuous Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by a double. + + + + + Gets the largest element in the domain of the distribution which can be represented by a double. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Discrete Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by an integer. + + + + + Gets the largest element in the domain of the distribution which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Probability Distribution. + + + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Continuous Univariate Inverse Gamma distribution. + The inverse Gamma distribution is a distribution over the positive real numbers parameterized by + two positive parameters. + Wikipedia - InverseGamma distribution. + + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Gets or sets the shape (α) parameter. Range: α > 0. + + + + + Gets or sets The scale (β) parameter. Range: β > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. 
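The interface descriptions above (continuous univariate, discrete univariate, and the base distribution with its random source) mean every distribution can be used through one common contract. A sketch under the assumption that these correspond to the IContinuousDistribution and IDiscreteDistribution interfaces of MathNet.Numerics.Distributions (the interface names are assumed, not stated in the text):

```csharp
using System;
using MathNet.Numerics.Distributions;

class InterfaceDemo
{
    static void Main()
    {
        // Any continuous distribution exposes Mean, Variance, Density,
        // CumulativeDistribution, Sample, ... through the common interface.
        IContinuousDistribution[] continuous =
        {
            new Gamma(2.0, 0.5),
            new InverseGamma(3.0, 2.0),
            new Normal(0.0, 1.0)
        };

        foreach (var d in continuous)
        {
            Console.WriteLine($"{d}: mean={d.Mean}, pdf(1)={d.Density(1.0)}, sample={d.Sample()}");
        }

        // Discrete distributions expose a probability mass function instead of a density.
        IDiscreteDistribution counts = new Geometric(0.5);
        Console.WriteLine($"P(X=2) = {counts.Probability(2)}");
    }
}
```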
+ + + + + Gets the median of the distribution. + + Throws . + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Gets the mean (μ) of the distribution. Range: μ > 0. + + + + + Gets the shape (λ) of the distribution. Range: λ > 0. + + + + + Initializes a new instance of the InverseGaussian class. + + The mean (μ) of the distribution. Range: μ > 0. 
+ The shape (λ) of the distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Inverse Gaussian distribution. + + + + + Gets the variance of the Inverse Gaussian distribution. + + + + + Gets the standard deviation of the Inverse Gaussian distribution. + + + + + Gets the median of the Inverse Gaussian distribution. + No closed form analytical expression exists, so this value is approximated numerically and can throw an exception. + + + + + Gets the minimum of the Inverse Gaussian distribution. + + + + + Gets the maximum of the Inverse Gaussian distribution. + + + + + Gets the skewness of the Inverse Gaussian distribution. + + + + + Gets the kurtosis of the Inverse Gaussian distribution. + + + + + Gets the mode of the Inverse Gaussian distribution. + + + + + Gets the entropy of the Inverse Gaussian distribution (currently not supported). + + + + + Generates a sample from the inverse Gaussian distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the inverse Gaussian distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the inverse Gaussian distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). 
+ + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Estimates the Inverse Gaussian parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + An Inverse Gaussian distribution. + + + + Multivariate Inverse Wishart distribution. This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The inverse Wishart distribution + is the conjugate prior for the covariance matrix of a multivariate normal distribution. + Wikipedia - Inverse-Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Gets or sets the degree of freedom (ν) for the inverse Wishart distribution. + + + + + Gets or sets the scale matrix (Ψ) for the inverse Wishart distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 0-340-80752-0. + + + + Gets the variance of the distribution. + + The variance of the distribution. + Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. + + + + Evaluates the probability density function for the inverse Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + a sample from the distribution. + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + The random number generator to use. 
+ The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + a sample from the distribution. + + + + Univariate Probability Distribution. + + + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Continuous Univariate Laplace distribution. + The Laplace distribution is a distribution over the real numbers parameterized by a mean and + scale parameter. The PDF is: + p(x) = \frac{1}{2 * scale} \exp{- |x - mean| / scale}. + Wikipedia - Laplace distribution. + + + + + Initializes a new instance of the class (location = 0, scale = 1). + + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + If is negative. + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + + + + Gets the location (μ) of the Laplace distribution. + + + + + Gets the scale (b) of the Laplace distribution. Range: b > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples a Laplace distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sample from the Laplace distribution. + + a sample from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. 
ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Log-Normal distribution. + For details about this distribution, see + Wikipedia - Log-Normal distribution. + + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the logarithm of the distribution. + The shape (σ) of the logarithm of the distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a log-normal distribution with the desired mu and sigma parameters. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Constructs a log-normal distribution with the desired mean and variance. + + The mean of the log-normal distribution. + The variance of the log-normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Estimates the log-normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + MATLAB: lognfit + + + + A string representation of the distribution. + + a string representation of the distribution. 
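The log-normal entries above offer three ways to obtain a distribution: direct log-scale/shape parameters, the desired mean and variance of the log-normal itself, and a maximum-likelihood fit from data (the MATLAB analogue lognfit is noted above). A sketch assuming the MathNet.Numerics.Distributions.LogNormal class with factory names WithMeanVariance and Estimate (names taken from that library, not from the text):

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class LogNormalDemo
{
    static void Main()
    {
        // Parameterized directly by the log-scale μ and shape σ of the underlying normal.
        var direct = new LogNormal(0.5, 0.25);

        // Built from the desired mean/variance of the log-normal itself,
        // which is often the more natural way to specify it.
        var byMoments = LogNormal.WithMeanVariance(2.0, 1.0);
        Console.WriteLine($"mu={byMoments.Mu}, sigma={byMoments.Sigma}, mean={byMoments.Mean}");

        // Maximum-likelihood fit from sample data.
        double[] data = direct.Samples().Take(10000).ToArray();
        var fitted = LogNormal.Estimate(data);
        Console.WriteLine($"fitted mu ≈ {fitted.Mu:F3}, sigma ≈ {fitted.Sigma:F3}");
    }
}
```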
+ + + + Tests whether the provided values are valid parameters for this distribution. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + + + + Gets the log-scale (μ) (mean of the logarithm) of the distribution. + + + + + Gets the shape (σ) (standard deviation of the logarithm) of the distribution. Range: σ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mu of the log-normal distribution. + + + + + Gets the variance of the log-normal distribution. + + + + + Gets the standard deviation of the log-normal distribution. + + + + + Gets the entropy of the log-normal distribution. + + + + + Gets the skewness of the log-normal distribution. + + + + + Gets the mode of the log-normal distribution. + + + + + Gets the median of the log-normal distribution. + + + + + Gets the minimum of the log-normal distribution. + + + + + Gets the maximum of the log-normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the density at . + + MATLAB: lognpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: logncdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the inverse cumulative density at . 
+ + MATLAB: logninv + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Multivariate Matrix-valued Normal distributions. The distribution + is parameterized by a mean matrix (M), a covariance matrix for the rows (V) and a covariance matrix + for the columns (K). If the dimension of M is d-by-m then V is d-by-d and K is m-by-m. + Wikipedia - MatrixNormal distribution. + + + + + The mean of the matrix normal distribution. + + + + + The covariance matrix for the rows. + + + + + The covariance matrix for the columns. + + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + The random number generator which is used to draw random samples. + If the dimensions of the mean and two covariance matrices don't match. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + + + + Gets the mean. (M) + + The mean of the distribution. + + + + Gets the row covariance. (V) + + The row covariance. + + + + Gets the column covariance. (K) + + The column covariance. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Evaluates the probability density function for the matrix normal distribution. + + The matrix at which to evaluate the density at. + the density at + If the argument does not have the correct dimensions. + + + + Samples a matrix normal distributed random variable. + + A random number from this distribution. + + + + Samples a matrix normal distributed random variable. + + The random number generator to use. 
+ The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + a sequence of samples from the distribution. + + + + Samples a vector normal distributed random variable. + + The random number generator to use. + The mean of the vector normal distribution. + The covariance matrix of the vector normal distribution. + a sequence of samples from defined distribution. + + + + Multivariate Multinomial distribution. For details about this distribution, see + Wikipedia - Multinomial distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. + + + + + Stores the normalized multinomial probabilities. + + + + + The number of trials. + + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + Initializes a new instance of the Multinomial class from histogram . The distribution will + not be automatically updated when the histogram changes. + + Histogram instance + The number of trials. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative returns false, + if the sum of parameters is 0.0, or if the number of trials is negative; otherwise true. + + + + Gets the proportion of ratios. + + + + + Gets the number of trials. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Computes values of the probability mass function. + + Non-negative integers x1, ..., xk + The probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Computes values of the log probability mass function. + + Non-negative integers x1, ..., xk + The log probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Samples one multinomial distributed random variable. + + the counts for each of the different possible values. + + + + Samples a sequence multinomially distributed random variables. + + a sequence of counts for each of the different possible values. + + + + Samples one multinomial distributed random variable. + + The random number generator to use. 
+ An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + the counts for each of the different possible values. + + + + Samples a multinomially distributed random variable. + + The random number generator to use. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of variables needed. + a sequence of counts for each of the different possible values. + + + + Discrete Univariate Negative Binomial distribution. + The negative binomial is a distribution over the natural numbers with two parameters r, p. For the special + case that r is an integer one can interpret the distribution as the number of failures before the r'th success + when the probability of success is p. + Wikipedia - NegativeBinomial distribution. + + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Gets the number of successes. Range: r ≥ 0. + + + + + Gets the probability of success. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). 
+ + The location in the domain where we want to evaluate the log probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Samples a negative binomial distributed random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + a sample from the distribution. + + + + Samples a NegativeBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of NegativeBinomial distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Continuous Univariate Normal distribution, also known as Gaussian distribution. + For details about this distribution, see + Wikipedia - Normal distribution. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. 
+ The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a normal distribution from a mean and standard deviation. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + a normal distribution. + + + + Constructs a normal distribution from a mean and variance. + + The mean (μ) of the normal distribution. + The variance (σ^2) of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + + + + Constructs a normal distribution from a mean and precision. + + The mean (μ) of the normal distribution. + The precision of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + + + + Estimates the normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + MATLAB: normfit + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Gets the mean (μ) of the normal distribution. + + + + + Gets the standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + + Gets the variance of the normal distribution. + + + + + Gets the precision of the normal distribution. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the entropy of the normal distribution. + + + + + Gets the skewness of the normal distribution. + + + + + Gets the mode of the normal distribution. + + + + + Gets the median of the normal distribution. + + + + + Gets the minimum of the normal distribution. + + + + + Gets the maximum of the normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + a sample from the distribution. 
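The Normal entries above expose the same distribution through mean/standard-deviation, mean/variance and mean/precision parameterizations, plus the CDF, its inverse, and a maximum-likelihood Estimate (MATLAB: normfit). A sketch assuming MathNet.Numerics.Distributions.Normal and the factory names WithMeanVariance/WithMeanPrecision (names assumed from that library):

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class NormalDemo
{
    static void Main()
    {
        // Mean 10, standard deviation 2.
        var n = new Normal(10.0, 2.0);

        // The same distribution built from a variance (σ² = 4) or a precision (1/σ² = 0.25).
        var fromVariance = Normal.WithMeanVariance(10.0, 4.0);
        var fromPrecision = Normal.WithMeanPrecision(10.0, 0.25);
        Console.WriteLine($"{n.StdDev} {fromVariance.StdDev} {fromPrecision.StdDev}"); // all 2

        // CDF and its inverse (quantile function) round-trip.
        double p = n.CumulativeDistribution(12.0);
        double x = n.InverseCumulativeDistribution(p);
        Console.WriteLine($"P(X <= 12) = {p:F4}, quantile = {x}");

        // Maximum-likelihood estimate from a finite batch of samples.
        double[] data = n.Samples().Take(5000).ToArray();
        var fitted = Normal.Estimate(data);
        Console.WriteLine($"fitted mean ≈ {fitted.Mean:F2}, stddev ≈ {fitted.StdDev:F2}");
    }
}
```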
+ + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the density at . + + MATLAB: normpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: normcdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the inverse cumulative density at . + + MATLAB: norminv + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + This structure represents the type over which the distribution + is defined. + + + + + Initializes a new instance of the struct. + + The mean of the pair. + The precision of the pair. + + + + Gets or sets the mean of the pair. + + + + + Gets or sets the precision of the pair. 
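The static entries above evaluate or sample the normal distribution without constructing an instance, and the sampling overloads take the random number generator explicitly. A sketch under the same MathNet.Numerics assumption, using a seeded MersenneTwister from MathNet.Numerics.Random for reproducibility; the parameter order of the array-filling overload follows the parameter list above (generator, array, mean, standard deviation):

```csharp
using System;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Random;

class NormalStaticDemo
{
    static void Main()
    {
        // Static helpers evaluate the distribution without allocating an instance.
        double pdf = Normal.PDF(0.0, 1.0, 1.96);    // MATLAB: normpdf
        double cdf = Normal.CDF(0.0, 1.0, 1.96);    // MATLAB: normcdf
        double q = Normal.InvCDF(0.0, 1.0, 0.975);  // MATLAB: norminv
        Console.WriteLine($"pdf={pdf:F4} cdf={cdf:F4} invcdf={q:F4}");

        // Passing a seeded generator keeps the drawn samples reproducible.
        var rng = new MersenneTwister(42);
        double one = Normal.Sample(rng, 0.0, 1.0);

        var buffer = new double[8];
        Normal.Samples(rng, buffer, 0.0, 1.0);  // fills the array in place
        Console.WriteLine($"one={one:F3}, first buffered={buffer[0]:F3}");
    }
}
```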
+ + + + + Multivariate Normal-Gamma Distribution. + The distribution is the conjugate prior distribution for the + distribution. It specifies a prior over the mean and precision of the distribution. + It is parameterized by four numbers: the mean location, the mean scale, the precision shape and the + precision inverse scale. + The distribution NG(mu, tau | mloc,mscale,psscale,pinvscale) = Normal(mu | mloc, 1/(mscale*tau)) * Gamma(tau | psscale,pinvscale). + The following degenerate cases are special: when the precision is known, + the precision shape will encode the value of the precision while the precision inverse scale is positive + infinity. When the mean is known, the mean location will encode the value of the mean while the scale + will be positive infinity. A completely degenerate NormalGamma distribution with known mean and precision is possible as well. + Wikipedia - Normal-Gamma distribution. + + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Gets the location of the mean. + + + + + Gets the scale of the mean. + + + + + Gets the shape of the precision. + + + + + Gets the inverse scale of the precision. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Returns the marginal distribution for the mean of the NormalGamma distribution. + + the marginal distribution for the mean of the NormalGamma distribution. + + + + Returns the marginal distribution for the precision of the distribution. + + The marginal distribution for the precision of the distribution/ + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the variance of the distribution. + + The mean of the distribution. + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + Density value + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + Density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + The log of the density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + The log of the density value + + + + Generates a sample from the NormalGamma distribution. + + a sample from the distribution. + + + + Generates a sequence of samples from the NormalGamma distribution + + a sequence of samples from the distribution. + + + + Generates a sample from the NormalGamma distribution. + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sample from the distribution. 
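The Normal-Gamma entries above describe a conjugate prior over a mean/precision pair, with marginal distributions for the mean and for the precision. A sketch assuming the NormalGamma class and the MeanPrecisionPair struct documented just before it; the marginal accessors are written here as MeanMarginal() and PrecisionMarginal(), which is an assumption about the method names:

```csharp
using System;
using MathNet.Numerics.Distributions;

class NormalGammaDemo
{
    static void Main()
    {
        // Prior over (mean, precision): mean location 0, mean scale 1,
        // precision shape 2, precision inverse scale 1.
        var prior = new NormalGamma(0.0, 1.0, 2.0, 1.0);

        // A sample is a mean/precision pair, matching the struct documented above.
        MeanPrecisionPair mp = prior.Sample();
        Console.WriteLine($"sampled mean={mp.Mean:F3}, precision={mp.Precision:F3}");

        // Density can be evaluated on the pair or on the two values directly.
        double d1 = prior.Density(mp);
        double d2 = prior.Density(mp.Mean, mp.Precision);
        Console.WriteLine($"density: {d1:F5} == {d2:F5}");

        // Marginal distributions for the mean and the precision
        // (method names assumed; see the marginal-distribution entries above).
        Console.WriteLine(prior.MeanMarginal());
        Console.WriteLine(prior.PrecisionMarginal());
    }
}
```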
+ + + + Generates a sequence of samples from the NormalGamma distribution + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sequence of samples from the distribution. + + + + Continuous Univariate Pareto distribution. + The Pareto distribution is a power law probability distribution that coincides with social, + scientific, geophysical, actuarial, and many other types of observable phenomena. + For details about this distribution, see + Wikipedia - Pareto distribution. + + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + If or are negative. + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The random number generator which is used to draw random samples. + If or are negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Pareto distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. 
+ The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Poisson distribution. + + + Distribution is described at Wikipedia - Poisson distribution. + Knuth's method is used to generate Poisson distributed random variables. + f(x) = exp(-λ)*λ^x/x!; + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + If is equal or less then 0.0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + If is equal or less then 0.0. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + + + + Gets the Poisson distribution parameter λ. Range: λ > 0. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. 
+ + Approximation, see Wikipedia Poisson distribution + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + Approximation, see Wikipedia Poisson distribution + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Generates one sample from the Poisson distribution. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by Knuth's method. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by "Rejection method PA". + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + "Rejection method PA" from "The Computer Generation of Poisson Random Variables" by A. C. Atkinson, + Journal of the Royal Statistical Society Series C (Applied Statistics) Vol. 28, No. 1. (1979) + The article is on pages 29-35. The algorithm given here is on page 32. + + + + Samples a Poisson distributed random variable. + + A sample from the Poisson distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Poisson distributed random variables. + + a sequence of successes in N trials. + + + + Samples a Poisson distributed random variable. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. 
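The Poisson entries above document the probability mass function, the CDF and two sampling routines (Knuth's method and Atkinson's "rejection method PA"). A short sketch, again assuming the MathNet.Numerics.Distributions API, with illustrative values:

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions.Poisson.
using System;
using MathNet.Numerics.Distributions;

class PoissonDemo
{
    static void Main()
    {
        const double lambda = 4.0;

        // Probability mass and cumulative distribution at k = 2.
        Console.WriteLine(Poisson.PMF(lambda, 2));   // P(X = 2) = exp(-4) * 4^2 / 2!
        Console.WriteLine(Poisson.CDF(lambda, 2.0)); // P(X <= 2)

        // Sampling; the docs above describe both Knuth's method and the
        // "rejection method PA" as the underlying generators.
        var rng = new Random(1);
        var dist = new Poisson(lambda);
        Console.WriteLine(Poisson.Sample(rng, lambda)); // static form
        Console.WriteLine(dist.Sample());               // instance form
    }
}
```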
+ + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Samples a Poisson distributed random variable. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Rayleigh distribution. + The Rayleigh distribution (pronounced /ˈreɪli/) is a continuous probability distribution. As an + example of how it arises, the wind speed will have a Rayleigh distribution if the components of + the two-dimensional wind velocity vector are uncorrelated and normally distributed with equal variance. + For details about this distribution, see + Wikipedia - Rayleigh distribution. + + + + + Initializes a new instance of the class. + + The scale (σ) of the distribution. Range: σ > 0. + If is negative. + + + + Initializes a new instance of the class. + + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the scale (σ) of the distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Rayleigh distribution. 
+ + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (σ) of the distribution. Range: σ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (σ) of the distribution. Range: σ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized Error Distribution (SGED). + Implements the univariate SSkewed Generalized Error Distribution. For details about this + distribution, see + + Wikipedia - Generalized Error Distribution. + It includes Laplace, Normal and Student-t distributions. + This is the distribution with q=Inf. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedError class. This is a generalized error distribution + with location=0.0, scale=1.0, skew=0.0 and p=2.0 (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. 
Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Generates a sample from the Skew Generalized Error distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized Error distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized T-distribution. + Implements the univariate Skewed Generalized t-distribution. For details about this + distribution, see + + Wikipedia - Skewed generalized t-distribution. + The skewed generalized t-distribution contains many different distributions within it + as special cases based on the parameterization chosen. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. 
Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedT class. This is a skewed generalized t-distribution + with location=0.0, scale=1.0, skew=0.0, p=2.0 and q=Inf (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Given a parameter set, returns the distribution that matches this parameterization. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + Null if no known distribution matches the parameterization, else the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the first parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Gets the second parameter that controls the kurtosis of the distribution. Range: q > 0. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. 
Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the inverse cumulative density at . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Skew Generalized t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Stable distribution. 
+ A random variable is said to be stable (or to have a stable distribution) if it has + the property that a linear combination of two independent copies of the variable has + the same distribution, up to location and scale parameters. + For details about this distribution, see + Wikipedia - Stable distribution. + + + + + Initializes a new instance of the class. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + + + + Initializes a new instance of the class. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + + + + Gets the stability (α) of the distribution. Range: 2 ≥ α > 0. + + + + + Gets The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + + + + + Gets the scale (c) of the distribution. Range: c > 0. + + + + + Gets the location (μ) of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets he entropy of the distribution. + + Always throws a not supported exception. + + + + Gets the skewness of the distribution. + + Throws a not supported exception of Alpha != 2. + + + + Gets the mode of the distribution. + + Throws a not supported exception if Beta != 0. + + + + Gets the median of the distribution. + + Throws a not supported exception if Beta != 0. + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + Throws a not supported exception if Alpha != 2, (Alpha != 1 and Beta !=0), or (Alpha != 0.5 and Beta != 1) + + + + Samples the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a random number from the distribution. + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Stable distribution. + + a sequence of samples from the distribution. 
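The Stable entries describe a distribution whose closed-form CDF is available only for the special cases listed above (α = 2, the Cauchy case α = 1, β = 0, and the Lévy case α = 0.5, β = 1), while sampling works for any valid parameter set. A hedged sketch assuming the MathNet.Numerics.Distributions API, with illustrative parameters:

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions.Stable with the
// parameter order documented above: alpha, beta, scale, location.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class StableDemo
{
    static void Main()
    {
        // alpha = 2 reduces to a normal distribution, so the CDF is available.
        var gaussianCase = new Stable(2.0, 0.0, 1.0, 0.0);
        Console.WriteLine(gaussianCase.CumulativeDistribution(0.0)); // 0.5

        // A heavy-tailed case: no closed-form CDF (it throws, per the note above),
        // but sampling is supported.
        var heavyTailed = new Stable(1.5, 0.0, 1.0, 0.0);
        var rng = new Random(7);
        double[] draws = Stable.Samples(rng, 1.5, 0.0, 1.0, 0.0).Take(5).ToArray();
        Console.WriteLine(string.Join(", ", draws.Select(d => d.ToString("F3"))));
        Console.WriteLine(heavyTailed.Sample());
    }
}
```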
+ + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Continuous Univariate Student's T-distribution. + Implements the univariate Student t-distribution. For details about this + distribution, see + + Wikipedia - Student's t-distribution. + + We use a slightly generalized version (compared to + Wikipedia) of the Student t-distribution. 
Namely, one which also + parameterizes the location and scale. See the book "Bayesian Data + Analysis" by Gelman et al. for more details. + The density of the Student t-distribution p(x|mu,scale,dof) = + Gamma((dof+1)/2) (1 + (x - mu)^2 / (scale * scale * dof))^(-(dof+1)/2) / + (Gamma(dof/2)*Sqrt(dof*pi*scale)). + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. This might involve heavy + computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the StudentT class. This is a Student t-distribution with location 0.0 + scale 1.0 and degrees of freedom 1. + + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Gets the location (μ) of the Student t-distribution. + + + + + Gets the scale (σ) of the Student t-distribution. Range: σ > 0. + + + + + Gets the degrees of freedom (ν) of the Student t-distribution. Range: ν > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Student t-distribution. + + + + + Gets the variance of the Student t-distribution. + + + + + Gets the standard deviation of the Student t-distribution. + + + + + Gets the entropy of the Student t-distribution. + + + + + Gets the skewness of the Student t-distribution. + + + + + Gets the mode of the Student t-distribution. + + + + + Gets the median of the Student t-distribution. + + + + + Gets the minimum of the Student t-distribution. + + + + + Gets the maximum of the Student t-distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . 
+ + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Samples student-t distributed random variables. + + The algorithm is method 2 in section 5, chapter 9 + in L. Devroye's "Non-Uniform Random Variate Generation" + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a random number from the standard student-t distribution. + + + + Generates a sample from the Student t-distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Student t-distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Student t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Student t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. 
+ a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Triangular distribution. + For details, see Wikipedia - Triangular distribution. + + The distribution will use the by default. + Users can get/set the random number generator by using the property. + The statistics classes will check whether all the incoming parameters are in the allowed range. This might involve heavy computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The random number generator which is used to draw random samples. + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets or sets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. 
This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Triangular distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Generates a sample from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). 
Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Initializes a new instance of the TruncatedPareto class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The random number generator which is used to draw random samples. + If or are non-positive or if T ≤ xm. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets the truncation (T) of the distribution. Range: T > 0. + + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Gets the mean of the truncated Pareto distribution. + + + + + Gets the variance of the truncated Pareto distribution. + + + + + Gets the standard deviation of the truncated Pareto distribution. + + + + + Gets the mode of the truncated Pareto distribution (not supported). + + + + + Gets the minimum of the truncated Pareto distribution. + + + + + Gets the maximum of the truncated Pareto distribution. + + + + + Gets the entropy of the truncated Pareto distribution (not supported). + + + + + Gets the skewness of the truncated Pareto distribution. + + + + + Gets the median of the truncated Pareto distribution. + + + + + Generates a sample from the truncated Pareto distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
+ + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Continuous Univariate Weibull distribution. + For details about this distribution, see + Wikipedia - Weibull distribution. + + + The Weibull distribution is parametrized by a shape and scale parameter. + + + + + Reusable intermediate result 1 / (_scale ^ _shape) + + + By caching this parameter we can get slightly better numerics precision + in certain constellations without any additional computations. + + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Gets the shape (k) of the Weibull distribution. Range: k > 0. + + + + + Gets the scale (λ) of the Weibull distribution. Range: λ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Weibull distribution. + + + + + Gets the variance of the Weibull distribution. + + + + + Gets the standard deviation of the Weibull distribution. + + + + + Gets the entropy of the Weibull distribution. + + + + + Gets the skewness of the Weibull distribution. + + + + + Gets the mode of the Weibull distribution. 
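The Weibull entries describe the shape (k) / scale (λ) parameterization and its descriptive statistics; the density, CDF and sampling helpers follow below. A minimal usage sketch under the same MathNet.Numerics.Distributions assumption, with illustrative parameters:

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions.Weibull,
// whose constructor takes the shape k first and the scale λ second.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class WeibullDemo
{
    static void Main()
    {
        const double k = 1.5, lambda = 2.0;

        var dist = new Weibull(k, lambda);
        Console.WriteLine($"mean={dist.Mean:F3} stddev={dist.StdDev:F3} mode={dist.Mode:F3}");

        // Static density / cumulative distribution helpers.
        Console.WriteLine(Weibull.PDF(k, lambda, 1.0)); // density at x = 1
        Console.WriteLine(Weibull.CDF(k, lambda, 1.0)); // P(X <= 1) = 1 - exp(-(1/λ)^k)

        // Sampling with an explicit random source.
        var rng = new Random(3);
        double[] draws = Weibull.Samples(rng, k, lambda).Take(1000).ToArray();
        Console.WriteLine(draws.Average());
    }
}
```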
+ + + + + Gets the median of the Weibull distribution. + + + + + Gets the minimum of the Weibull distribution. + + + + + Gets the maximum of the Weibull distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Generates a sample from the Weibull distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Weibull distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Implemented according to: Parameter estimation of the Weibull probability distribution, 1994, Hongzhu Qiao, Chris P. Tsokos + + + + Returns a Weibull distribution. + + + + Generates a sample from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. 
+ The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Multivariate Wishart distribution. This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The Wishart distribution + is the conjugate prior for the precision (inverse covariance) matrix of the multivariate + normal distribution. + Wikipedia - Wishart distribution. + + + + + The degrees of freedom for the Wishart distribution. + + + + + The scale matrix for the Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The random number generator which is used to draw random samples. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Gets or sets the degrees of freedom (n) for the Wishart distribution. + + + + + Gets or sets the scale matrix (V) for the Wishart distribution. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + + + + Gets the variance of the distribution. + + The variance of the distribution. + + + + Evaluates the probability density function for the Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + A random number from this distribution. + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The cholesky decomposition to use. + a random number from the distribution. + + + + Discrete Univariate Zipf distribution. + Zipf's law, an empirical law formulated using mathematical statistics, refers to the fact + that many types of data studied in the physical and social sciences can be approximated with + a Zipfian distribution, one of a family of related discrete power law probability distributions. + For details about this distribution, see + Wikipedia - Zipf distribution. + + + + + The s parameter of the distribution. + + + + + The n parameter of the distribution. + + + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. 
+ + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Gets or sets the s parameter of the distribution. + + + + + Gets or sets the n parameter of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The s parameter of the distribution. + The n parameter of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the Zipf distribution without doing parameter checking. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + a random number from the Zipf distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of zipf distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. 
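The entries above describe the continuous Weibull and the discrete Zipf distributions. A minimal sketch of how these classes are typically used, assuming the MathNet.Numerics.Distributions namespace; the wrapper class, Main method, and all parameter values are illustrative only:

```csharp
using System;
using MathNet.Numerics.Distributions;

class DistributionDemo
{
    static void Main()
    {
        // Weibull with shape k = 1.5 and scale λ = 2.0 (illustrative values).
        var weibull = new Weibull(1.5, 2.0);
        Console.WriteLine($"mean = {weibull.Mean}, variance = {weibull.Variance}");
        Console.WriteLine($"pdf(1.0) = {weibull.Density(1.0)}");
        Console.WriteLine($"cdf(1.0) = {weibull.CumulativeDistribution(1.0)}");
        Console.WriteLine($"sample   = {weibull.Sample()}");

        // Static equivalents avoid constructing an instance.
        Console.WriteLine($"static pdf = {Weibull.PDF(1.5, 2.0, 1.0)}");

        // Zipf with exponent s = 1.2 over n = 100 elements (illustrative values).
        var zipf = new Zipf(1.2, 100);
        Console.WriteLine($"P(X = 3)  = {zipf.Probability(3)}");
        Console.WriteLine($"P(X <= 3) = {zipf.CumulativeDistribution(3)}");
        Console.WriteLine($"sample    = {zipf.Sample()}");
    }
}
```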
+ + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Integer number theory functions. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Find out whether the provided 32 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 64 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 32 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 64 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 32 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 64 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 32 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Find out whether the provided 64 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Raises 2 to the provided integer exponent (0 <= exponent < 31). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Raises 2 to the provided integer exponent (0 <= exponent < 63). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Evaluate the binary logarithm of an integer number. + + Two-step method using a De Bruijn-like sequence table lookup. + + + + Find the closest perfect power of two that is larger or equal to the provided + 32 bit integer. + + The number of which to find the closest upper power of two. + A power of two. + + + + + Find the closest perfect power of two that is larger or equal to the provided + 64 bit integer. + + The number of which to find the closest upper power of two. 
+ A power of two. + + + + + Returns the greatest common divisor (gcd) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's + algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the greatest common divisor (gcd) of two big integers. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two big integers. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Collection of functions equivalent to those provided by Microsoft Excel + but backed instead by Math.NET Numerics. + We do not recommend to use them except in an intermediate phase when + porting over solutions previously implemented in Excel. + + + + + An algorithm failed to converge. + + + + + An algorithm failed to converge due to a numerical breakdown. + + + + + An error occurred calling native provider function. + + + + + An error occurred calling native provider function. + + + + + Native provider was unable to allocate sufficient memory. + + + + + Native provider failed LU inversion do to a singular U matrix. 
+ + + + + Compound Monthly Return or Geometric Return or Annualized Return + + + + + Average Gain or Gain Mean + This is a simple average (arithmetic mean) of the periods with a gain. It is calculated by summing the returns for gain periods (return 0) + and then dividing the total by the number of gain periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Average Loss or LossMean + This is a simple average (arithmetic mean) of the periods with a loss. It is calculated by summing the returns for loss periods (return < 0) + and then dividing the total by the number of loss periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Calculation is similar to Standard Deviation , except it calculates an average (mean) return only for periods with a gain + and measures the variation of only the gain periods around the gain mean. Measures the volatility of upside performance. + © Copyright 1996, 1999 Gary L.Gastineau. First Edition. © 1992 Swiss Bank Corporation. + + + + + Similar to standard deviation, except this statistic calculates an average (mean) return for only the periods with a loss and then + measures the variation of only the losing periods around this loss mean. This statistic measures the volatility of downside performance. + + http://www.offshore-library.com/kb/statistics.php + + + + This measure is similar to the loss standard deviation except the downside deviation + considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean. + For example, if the MAR is 7%, the downside deviation would measure the variation of each period that falls below + 7%. (The loss standard deviation, on the other hand, would take only losing periods, calculate an average return for + the losing periods, and then measure the variation between each losing return and the losing return average). + + + + + A measure of volatility in returns below the mean. It's similar to standard deviation, but it only + looks at periods where the investment return was less than average return. + + + + + Measures a fund’s average gain in a gain period divided by the fund’s average loss in a losing + period. Periods can be monthly or quarterly depending on the data frequency. + + + + + Find value x that minimizes the scalar function f(x), constrained within bounds, using the Golden Section algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + The missing gradient is evaluated numerically (forward difference). + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. 
+ An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. + An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + + Find both complex roots of the quadratic equation c + b*x + a*x^2 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The coefficients of the polynomial in ascending order, e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2" + The roots of the polynomial + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The polynomial. + The roots of the polynomial + + + + Find all roots of the Chebychev polynomial of the first kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*(2i-1)/(2n)) + + + + Find all roots of the Chebychev polynomial of the second kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*i/(n-1)) + + + + Least-Squares Curve Fitting Routines + + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as [a, b] array, + where a is the intercept and b the slope. 
+ + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning its best fitting parameters as (a, r) tuple. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning a function y' for the best fitting polynomial. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Weighted Least-Squares fitting the points (x,y) and weights w to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning a function y' for the best fitting combination. 
+ If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Weighted Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) and weights w to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning its best fitting parameter p. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning its best fitting parameter p0 and p1. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning its best fitting parameter p0, p1 and p2. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning a function y' for the best fitting curve. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate a linearly spaced sample vector of the given length between the specified values (inclusive). + Equivalent to MATLAB linspace but with the length as first instead of last argument. 
+ + + + + Generate samples by sampling a function at linearly spaced points between the specified values (inclusive). + + + + + Generate a base 10 logarithmically spaced sample vector of the given length between the specified decade exponents (inclusive). + Equivalent to MATLAB logspace but with the length as first instead of last argument. + + + + + Generate samples by sampling a function at base 10 logarithmically spaced points between the specified decade exponents (inclusive). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate samples by sampling a function at linearly spaced points within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + + + + + Create a periodic wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic wave. + + The number of samples to generate. + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). 
Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a Sine wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite Sine wave sequence. + + Samples per unit. + Frequency in samples per unit. + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic square wave, starting with the high phase. + + The number of samples to generate. + Number of samples of the high phase. + Number of samples of the low phase. + Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create an infinite periodic square wave sequence, starting with the high phase. + + Number of samples of the high phase. + Number of samples of the low phase. + Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create a periodic triangle wave, starting with the raise phase from the lowest sample. + + The number of samples to generate. + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic triangle wave sequence, starting with the raise phase from the lowest sample. + + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create a periodic sawtooth wave, starting with the lowest sample. + + The number of samples to generate. + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic sawtooth wave sequence, starting with the lowest sample. + + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an array with each field set to the same value. + + The number of samples to generate. + The value that each field should be set to. + + + + Create an infinite sequence where each element has the same value. + + The value that each element should be set to. + + + + Create a Heaviside Step sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. + + + + Create an infinite Heaviside Step sample sequence. + + The maximal reached peak. + Offset to the time axis. + + + + Create a Kronecker Delta impulse sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Create a Kronecker Delta impulse sample vector. + + The maximal reached peak. + Offset to the time axis, hence the sample index of the impulse. + + + + Create a periodic Kronecker Delta impulse sample vector. + + The number of samples to generate. + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. 
+ + + + Create a Kronecker Delta impulse sample vector. + + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Generate samples generated by the given computation. + + + + + Generate an infinite sequence generated by the given computation. + + + + + Generate a Fibonacci sequence, including zero as first value. + + + + + Generate an infinite Fibonacci sequence, including zero as first value. + + + + + Create random samples, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create an infinite random sample sequence, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create samples with independent amplitudes of standard distribution. + + + + + Create an infinite sample sequence with independent amplitudes of standard distribution. + + + + + Create samples with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create an infinite sample sequence with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Generate samples by sampling a function at samples from a probability distribution. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution. + + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution. + + + + + Globalized String Handling Helpers + + + + + Tries to get a from the format provider, + returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format + provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Globalized Parsing: Tokenize a node by splitting it into several nodes. + + Node that contains the trimmed string to be tokenized. + List of keywords to tokenize by. + keywords to skip looking for (because they've already been handled). 
+ + + + Globalized Parsing: Parse a double number + + First token of the number. + Culture Info. + The parsed double number using the given culture information. + + + + + Globalized Parsing: Parse a float number + + First token of the number. + Culture Info. + The parsed float number using the given culture information. + + + + + Calculates r^2, the square of the sample correlation coefficient between + the observed outcomes and the observed predictor values. + Not to be confused with R^2, the coefficient of determination, see . + + The modelled/predicted values + The observed/actual values + Squared Person product-momentum correlation coefficient. + + + + Calculates r, the sample correlation coefficient between the observed outcomes + and the observed predictor values. + + The modelled/predicted values + The observed/actual values + Person product-momentum correlation coefficient. + + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The Standard Error of the regression + + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The degrees of freedom by which the + number of samples is reduced for performing the Standard Error calculation + The Standard Error of the regression + + + + Calculates the R-Squared value, also known as coefficient of determination, + given some modelled and observed values. + + The values expected from the model. + The actual values obtained. + Coefficient of determination. + + + + Complex Fast (FFT) Implementation of the Discrete Fourier Transform (DFT). + + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed from the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. 
+ Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. 
+ + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. 
+ + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Generate the frequencies corresponding to each index in frequency space. + The frequency space has a resolution of sampleRate/N. + Index 0 corresponds to the DC part, the following indices correspond to + the positive frequencies up to the Nyquist frequency (sampleRate/2), + followed by the negative frequencies wrapped around. + + Number of samples. + The sampling rate of the time-space data. + + + + Fourier Transform Convention + + + + + Inverse integrand exponent (forward: positive sign; inverse: negative sign). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling and common exponent (used in Maple). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction (used in Matlab). [= AsymmetricScaling] + + + + + Inverse integrand exponent; No scaling at all (used in all Numerical Recipes based implementations). [= InverseExponent | NoScaling] + + + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + + + Naive forward DHT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Hartley Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive inverse DHT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Hartley Transform Convention Options. + Corresponding time-space vector. + + + + Rescale FFT-the resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Rescale the iFFT-resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Naive generic DHT, useful e.g. 
to verify faster algorithms. + + Time-space sample vector. + Corresponding frequency-space vector. + + + + Hartley Transform Convention + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling. + + + + + Numerical Integration (Quadrature). + + + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Legendre quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. 
+ Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Numerical Contour Integration of a complex-valued function over a real variable,. + + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. 
When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Analytic integration algorithm for smooth functions with no discontinuities + or derivative discontinuities and no poles inside the interval. + + + + + Maximum number of iterations, until the asked + maximum error is (likely to be) satisfied. + + + + + Approximate the integral by the double exponential transformation + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximate the integral by the double exponential transformation + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Compute the abscissa vector for a single level. + + The level to evaluate the abscissa vector for. + Abscissa Vector. + + + + Compute the weight vector for a single level. + + The level to evaluate the weight vector for. + Weight Vector. + + + + Precomputed abscissa vector per level. + + + + + Precomputed weight vector per level. + + + + + Getter for the order. + + + + + Getter that returns a clone of the array containing the Kronrod abscissas. + + + + + Getter that returns a clone of the array containing the Kronrod weights. + + + + + Getter that returns a clone of the array containing the Gauss weights. + + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth function to integrate + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth complex function to integrate, defined on the real axis. + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. 
+ The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + + Initializes a new instance of the class. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + Gettter for the ith abscissa. + + Index of the ith abscissa. + The ith abscissa. + + + + Getter that returns a clone of the array containing the abscissas. + + + + + Getter for the ith weight. + + Index of the ith weight. + The ith weight. + + + + Getter that returns a clone of the array containing the weights. + + + + + Getter for the order. + + + + + Getter for the InvervalBegin. + + + + + Getter for the InvervalEnd. + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth function to integrate. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Contains a method to compute the Gauss-Kronrod abscissas/weights and precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + Contains a method to compute the Gauss-Kronrod abscissas/weights. 
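Besides the static helpers, the Gauss-Legendre rule documented above can be constructed once and reused, exposing its mapped abscissas and weights. A sketch assuming the constructor takes (intervalBegin, intervalEnd, order) as the parameter descriptions suggest, and that the getters are called `Abscissas`, `Weights` and `Order` (assumed names):

```csharp
using System;
using MathNet.Numerics.Integration;

class GaussLegendreDemo
{
    static void Main()
    {
        // 5th-order rule mapped onto [0, 1]; abscissas/weights for this order are precomputed.
        var rule = new GaussLegendreRule(0.0, 1.0, 5);

        // The getters are documented as returning clones, so fetch them once.
        double[] xs = rule.Abscissas;
        double[] ws = rule.Weights;

        double sum = 0.0;
        for (int i = 0; i < rule.Order; i++)
        {
            // Weighted sum of f(x) = x^2 at the mapped abscissas: integral over [0,1] is 1/3.
            sum += ws[i] * xs[i] * xs[i];
        }

        Console.WriteLine(sum);  // ~0.3333
    }
}
```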
+ + + + + Precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + + + Computes the Gauss-Kronrod abscissas/weights and Gauss weights. + + Defines an Nth order Gauss-Kronrod rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. + Object containing the non-negative abscissas/weights, order. + + + + Returns coefficients of a Stieltjes polynomial in terms of Legendre polynomials. + + + + + Return value and derivative of a Legendre series at given points. + + + + + Return value and derivative of a Legendre polynomial of order at given points. + + + + + Creates a Gauss-Kronrod point. + + + + + Getter for the GaussKronrodPoint. + + Defines an Nth order Gauss-Kronrod rule. Precomputed Gauss-Kronrod abscissas/weights for orders 15, 21, 31, 41, 51, 61 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, and order. + + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Computes the Gauss-Legendre abscissas/weights. + See Pavel Holoborodko for a description of the algorithm. + + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. 1e-10 is usually fine. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Creates and maps a Gauss-Legendre point. + + + + + Getter for the GaussPoint. + + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Getter for the GaussPoint. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Maps the non-negative abscissas/weights from the interval [-1, 1] to the interval [intervalBegin, intervalEnd]. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Contains the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + + Contains two GaussPoint. + + + + + Approximation algorithm for definite integrals by the Trapezium rule of the Newton-Cotes family. 
+ + + Wikipedia - Trapezium Rule + + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, define don real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation algorithm for definite integrals by Simpson's rule. + + + + + Direct 3-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. 
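The trapezium and Simpson entries above follow the same pattern: a direct low-order estimate, a composite N-point version, and (for the trapezium rule) an adaptive variant. A short sketch, assuming the classes are MathNet.Numerics' `NewtonCotesTrapeziumRule` and `SimpsonRule` (assumed names; the text gives only the descriptions):

```csharp
using System;
using MathNet.Numerics.Integration;   // NewtonCotesTrapeziumRule, SimpsonRule (assumed names)

class NewtonCotesDemo
{
    static void Main()
    {
        Func<double, double> f = x => Math.Exp(-x * x);   // smooth integrand on [0, 2]

        // Direct 2-point trapezium estimate (coarse) vs. composite with 1000 partitions.
        double coarse = NewtonCotesTrapeziumRule.IntegrateTwoPoint(f, 0.0, 2.0);
        double fine   = NewtonCotesTrapeziumRule.IntegrateComposite(f, 0.0, 2.0, 1000);

        // Composite Simpson rule; the partition count must be even, as noted above.
        double simpson = SimpsonRule.IntegrateComposite(f, 0.0, 2.0, 1000);

        Console.WriteLine($"{coarse} {fine} {simpson}");
    }
}
```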
+ + + + Composite N-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Even number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Interpolation Factory. + + + + + Creates an interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Floater-Hormann rational pole-free interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Bulirsch Stoer rational interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.BulirschStoerRationalInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a barycentric polynomial interpolation where the given sample points are equidistant. + + The sample points t, must be equidistant. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolatePolynomialEquidistantSorted + instead, which is more efficient. + + + + + Create a Neville polynomial interpolation based on arbitrary points. + If the points happen to be equidistant, consider to use the much more robust PolynomialEquidistant instead. + Otherwise, consider whether RationalWithoutPoles would not be a more robust alternative. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.NevillePolynomialInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a piecewise linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. 
+ + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LinearSpline.InterpolateSorted + instead, which is more efficient. + + + + + Create piecewise log-linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LogLinear.InterpolateSorted + instead, which is more efficient. + + + + + Create an piecewise natural cubic spline interpolation based on arbitrary points, + with zero secondary derivatives at the boundaries. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateNaturalSorted + instead, which is more efficient. + + + + + Create an piecewise cubic Akima spline interpolation based on arbitrary points. + Akima splines are robust to outliers. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateAkimaSorted + instead, which is more efficient. + + + + + Create a piecewise cubic Hermite spline interpolation based on arbitrary points + and their slopes/first derivative. + + The sample points t. + The sample point values x(t). + The slope at the sample points. Optimized for arrays. + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateHermiteSorted + instead, which is more efficient. + + + + + Create a step-interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.StepInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Barycentric Interpolation Algorithm. + + Supports neither differentiation nor integration. + + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + Barycentric weights (N), sorted ascendingly by x. + + + + Create a barycentric polynomial interpolation from a set of (x,y) value pairs with equidistant x, sorted ascendingly by x. + + + + + Create a barycentric polynomial interpolation from an unordered set of (x,y) value pairs with equidistant x. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a barycentric polynomial interpolation from an unsorted set of (x,y) value pairs with equidistant x. 
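All of the factory methods listed above return an interpolation object that is then evaluated at arbitrary points. A sketch using two of the sorted-array factories the text itself names (`LinearSpline.InterpolateSorted`, `CubicSpline.InterpolateNaturalSorted`); the member names `Interpolate`, `Differentiate` and `Integrate` are assumed to correspond to the per-point operations documented below:

```csharp
using System;
using MathNet.Numerics.Interpolation;

class InterpolationDemo
{
    static void Main()
    {
        // The *Sorted factories expect the sample points already sorted ascending.
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] x = { 0.0, 1.0, 4.0, 9.0, 16.0 };   // x(t) = t^2 sampled

        var linear = LinearSpline.InterpolateSorted(t, x);
        var cubic  = CubicSpline.InterpolateNaturalSorted(t, x);

        Console.WriteLine(linear.Interpolate(2.5));     // piecewise linear estimate (6.5)
        Console.WriteLine(cubic.Interpolate(2.5));      // smoother cubic estimate
        Console.WriteLine(cubic.Differentiate(2.5));    // first derivative of the spline
        Console.WriteLine(cubic.Integrate(0.0, 4.0));   // definite integral of the spline
    }
}
```

The `*Sorted` variants skip the internal sort, which is the efficiency the remarks above refer to when the same arrays are interpolated repeatedly.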
+ + + + + Create a barycentric polynomial interpolation from a set of values related to linearly/equidistant spaced points within an interval. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Rational Interpolation (with poles) using Roland Bulirsch and Josef Stoer's Algorithm. + + + + This algorithm supports neither differentiation nor integration. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Bulirsch-Stoer rational interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). 
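The Floater-Hormann barycentric scheme above interpolates without poles but, as the entries note, supports neither differentiation nor integration. A sketch using the sorted factory named in the text (`Barycentric.InterpolateRationalFloaterHormannSorted`); the `IInterpolation` interface and its `SupportsDifferentiation`/`SupportsIntegration` flags are assumed from the property descriptions above:

```csharp
using System;
using MathNet.Numerics.Interpolation;

class BarycentricDemo
{
    static void Main()
    {
        double[] t = { 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 };
        double[] x = { 1.0, 0.8, 0.5, 0.4, 0.3, 0.25, 0.2 };

        // Rational interpolation without poles (Floater-Hormann), sorted input.
        IInterpolation rational = Barycentric.InterpolateRationalFloaterHormannSorted(t, x);

        Console.WriteLine(rational.Interpolate(1.25));

        // As documented above, derivative and integral queries are not supported here.
        Console.WriteLine(rational.SupportsDifferentiation);  // false
        Console.WriteLine(rational.SupportsIntegration);      // false
    }
}
```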
+ + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Cubic Spline Interpolation. + + Supports both differentiation and integration. + + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + third order spline coefficients (N) + + + + Create a Hermite cubic spline interpolation from a set of (x,y) value pairs and their slope (first derivative), sorted ascendingly by x. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + + + + + Create an Akima cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + Akima splines are robust to outliers. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + + + + + Create a cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x, + and custom boundary/termination conditions. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + + + + + Create a natural cubic spline interpolation from a set of (x,y) value pairs + and zero second derivatives at the two boundaries, sorted ascendingly by x. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + + + + + Three-Point Differentiation Helper. + + Sample Points t. + Sample Values x(t). + Index of the point of the differentiation. + Index of the first sample. + Index of the second sample. + Index of the third sample. + The derivative approximation. + + + + Tridiagonal Solve Helper. + + The a-vector[n]. + The b-vector[n], will be modified by this function. + The c-vector[n]. + The d-vector[n], will be modified by this function. 
+ The x-vector[n] + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Interpolation within the range of a discrete set of known data points. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Piece-wise Linear Interpolation. + + Supports both differentiation and integration. + + + Sample points (N+1), sorted ascending + Sample values (N or N+1) at the corresponding points; intercept, zero order coefficients + Slopes (N) at the sample points (first order coefficients): N + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Piece-wise Log-Linear Interpolation + + This algorithm supports differentiation, not integration. 
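The cubic-spline factories documented just above include an Akima variant (robust to outliers) and a Hermite variant that takes the slope at each sample point. A sketch using the sorted factory names cited in the text (`CubicSpline.InterpolateAkimaSorted`, `CubicSpline.InterpolateHermiteSorted`); the slope array is assumed to line up index-for-index with the sample arrays:

```csharp
using System;
using MathNet.Numerics.Interpolation;

class CubicSplineVariantsDemo
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 };
        double[] x = { 0.0, 1.0, 0.9, 5.0, 1.1, 1.2 };   // contains an outlier at t = 3

        // Akima spline: less overshoot around the outlier than a natural spline.
        var akima = CubicSpline.InterpolateAkimaSorted(t, x);

        // Hermite spline: the first derivative at each sample point is prescribed.
        double[] slopes = { 1.0, 0.5, 0.0, 0.0, 0.1, 0.1 };
        var hermite = CubicSpline.InterpolateHermiteSorted(t, x, slopes);

        Console.WriteLine(akima.Interpolate(2.5));
        Console.WriteLine(hermite.Differentiate(2.5));   // derivative of the Hermite spline
    }
}
```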
+ + + + Internal Spline Interpolation + + + + Sample points (N), sorted ascending + Natural logarithm of the sample values (N) at the corresponding points + + + + Create a piecewise log-linear interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Lagrange Polynomial Interpolation using Neville's Algorithm. + + + + This algorithm supports differentiation, but doesn't support integration. + + + When working with equidistant or Chebyshev sample points it is + recommended to use the barycentric algorithms specialized for + these cases instead of this arbitrary Neville algorithm. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Neville polynomial interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Quadratic Spline Interpolation. + + Supports both differentiation and integration. + + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. 
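The Neville polynomial scheme above supports differentiation but not integration, and the remarks recommend the equidistant barycentric variant when the points are equally spaced. A small sketch using the sorted factory the text names (`NevillePolynomialInterpolation.InterpolateSorted`):

```csharp
using System;
using MathNet.Numerics.Interpolation;

class NevilleDemo
{
    static void Main()
    {
        // Few, non-equidistant sample points; large point counts make a single
        // global polynomial ill-conditioned, as the remarks above caution.
        double[] t = { 0.0, 0.7, 1.5, 2.1, 3.0 };
        double[] x = { 1.0, 2.1, 0.9, 1.4, 2.0 };

        IInterpolation poly = NevillePolynomialInterpolation.InterpolateSorted(t, x);

        Console.WriteLine(poly.Interpolate(1.0));
        Console.WriteLine(poly.Differentiate(1.0));    // supported
        Console.WriteLine(poly.SupportsIntegration);   // false, as documented above
    }
}
```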
+ + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Left and right boundary conditions. + + + + + Natural Boundary (Zero second derivative). + + + + + Parabolically Terminated boundary. + + + + + Fixed first derivative at the boundary. + + + + + Fixed second derivative at the boundary. + + + + + A step function where the start of each segment is included, and the last segment is open-ended. + Segment i is [x_i, x_i+1) for i < N, or [x_i, infinity] for i = N. + The domain of the function is all real numbers, such that y = 0 where x <. + + Supports both differentiation and integration. + + + Sample points (N), sorted ascending + Samples values (N) of each segment starting at the corresponding sample point. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t. + + + + + Wraps an interpolation with a transformation of the interpolated values. + + Neither differentiation nor integration is supported. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + A Matrix class with dense storage. 
The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. 
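The dense-matrix constructors above differ mainly in whether they copy the source data or bind to it. A sketch assuming the usual MathNet.Numerics factory names `DenseMatrix.OfArray` and `DenseMatrix.Create` (assumed; the text gives only the descriptions) plus the raw-array constructor that, as documented, binds to column-major storage without copying:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;   // DenseMatrix (factory names assumed)

class DenseMatrixDemo
{
    static void Main()
    {
        // Copying factory: the matrix is independent of the source 2D array.
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 1.0, 2.0 },
            { 3.0, 4.0 },
        });

        // Init-function factory: 3x3 identity built element by element.
        var b = DenseMatrix.Create(3, 3, (i, j) => i == j ? 1.0 : 0.0);

        // Raw-array constructor: column-major storage bound without copying,
        // so changes to 'data' are visible through 'c' and vice versa.
        var data = new double[] { 1.0, 3.0, 2.0, 4.0 };   // columns: (1,3) and (2,4)
        var c = new DenseMatrix(2, 2, data);

        Console.WriteLine(a * c);               // matrix product
        Console.WriteLine(a.FrobeniusNorm());   // entry-wise Frobenius norm, as documented
        Console.WriteLine(b.Trace());           // sum of the diagonal entries
    }
}
```

Binding instead of copying avoids an allocation, at the price that the matrix and the array alias the same storage.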
+ + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. 
+ The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. 
It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. 
+ This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. 
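The dense-vector API summarised above mirrors the matrix one: construction either copies or binds to a raw array, and the usual dot product and norms are available. A short sketch; the factory name `DenseVector.OfEnumerable` and the `*` operator returning the dot product are assumptions based on the constructor, operator and norm entries documented above:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseVectorDemo
{
    static void Main()
    {
        // Binds directly to the array: no copy, both views share storage.
        var raw = new double[] { 3.0, -4.0, 0.0 };
        var v = new DenseVector(raw);

        // Independent copy of an enumerable.
        var w = DenseVector.OfEnumerable(new[] { 1.0, 2.0, 2.0 });

        Console.WriteLine(v * w);           // dot product: 3*1 + (-4)*2 + 0*2 = -5
        Console.WriteLine(v.L2Norm());      // Euclidean norm: 5
        Console.WriteLine(v.L1Norm());      // Manhattan norm: 7
        Console.WriteLine((v + w).Sum());   // element-wise add, then sum of elements
    }
}
```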
+ + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the remainder of. + The divisor to use, + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a double dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. 
+ Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. 
+ + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. 
+ The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. 
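A short sketch of how the Cholesky and LU members documented in this section are typically used to solve Ax = b. The Solve and Inverse members follow the summaries above; class and namespace names are assumed from Math.NET Numerics conventions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class FactorizationSolveExample
{
    static void Main()
    {
        // Symmetric, positive definite matrix, so a Cholesky factorization exists.
        var A = DenseMatrix.OfArray(new[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0 });

        // Cholesky: A = L*L'. The factorization is computed once and cached.
        var cholesky = A.Cholesky();
        Vector<double> x1 = cholesky.Solve(b);

        // LU factorization: also usable for the inverse, as documented above.
        var lu = A.LU();
        Vector<double> x2 = lu.Solve(b);
        Matrix<double> inverse = lu.Inverse();

        Console.WriteLine(x1);
        Console.WriteLine((x1 - x2).L2Norm());   // ~0, both factorizations solve the same system
        Console.WriteLine(inverse);
    }
}
```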
+ + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + Matrix V is encoded in the property EigenVectors in the way that: + - column corresponding to real eigenvalue represents real eigenvector, + - columns corresponding to the pair of complex conjugate eigenvalues + lambda[i] and lambda[i+1] encode real and imaginary parts of eigenvectors. + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. 
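The eigenvalue decomposition remarks above state that A.Multiply(V) equals V.Multiply(D); a hedged sketch that checks this numerically, under the same naming assumptions as the earlier examples.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class EvdExample
{
    static void Main()
    {
        // Symmetric matrix: eigenvalues are real and the eigenvector matrix V is orthogonal.
        var A = DenseMatrix.OfArray(new[,] { { 2.0, 1.0 }, { 1.0, 2.0 } });

        var evd = A.Evd();
        Matrix<double> V = evd.EigenVectors;   // columns are the eigenvectors
        Matrix<double> D = evd.D;              // (block-)diagonal eigenvalue matrix

        // As stated in the remarks: A.Multiply(V) equals V.Multiply(D).
        double residual = (A.Multiply(V) - V.Multiply(D)).FrobeniusNorm();
        Console.WriteLine(residual);           // ~0 up to rounding error
    }
}
```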
+ + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. 
+ For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. 
+ Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. 
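Because the QR factorization documented here accepts matrices with at least as many rows as columns, it can also be used for least-squares fitting. A small sketch, again with assumed namespaces:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class QrLeastSquaresExample
{
    static void Main()
    {
        // Three equations, two unknowns: fit y = c0 + c1*t at t = 0, 1, 2.
        var A = DenseMatrix.OfArray(new[,]
        {
            { 1.0, 0.0 },
            { 1.0, 1.0 },
            { 1.0, 2.0 }
        });
        var y = DenseVector.OfArray(new[] { 1.0, 2.9, 5.1 });

        // QR requires at least as many rows as columns (see the exception documented above).
        var qr = A.QR();
        Vector<double> coefficients = qr.Solve(y);   // least-squares solution

        Console.WriteLine(coefficients);
    }
}
```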
+ + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. 
Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + double version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. 
+ + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. 
+ Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
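The example code referred to above appears to have been stripped from this file. The following is a minimal, hedged sketch of driving the documented Solve(matrix, input, result, iterator, preconditioner) signature; the solver, iterator, stop-criterion and preconditioner class names are assumptions based on Math.NET Numerics naming and should be checked against the shipped assembly.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // assumed location of BiCgStab and DiagonalPreconditioner
using MathNet.Numerics.LinearAlgebra.Solvers;          // assumed location of Iterator and stop criteria

class BiCgStabExample
{
    static void Main()
    {
        // Small non-symmetric system A x = b.
        var A = SparseMatrix.OfArray(new[,] { { 4.0, 1.0 }, { 2.0, 3.0 } });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0 });
        var x = new DenseVector(2);   // result vector, filled in by the solver

        // Stop after at most 1000 iterations or once the residual is small enough.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);   // approximate solution of A x = b
    }
}
```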
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
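Again, the example code mentioned above is missing; a compact sketch under the same naming assumptions, leaving the documented BiCGStab/GPBiCG step-switching counts at their defaults.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // assumed location of GpBiCg
using MathNet.Numerics.LinearAlgebra.Solvers;

class GpBiCgExample
{
    static void Main()
    {
        var A = SparseMatrix.OfArray(new[,] { { 3.0, 1.0 }, { 1.0, 4.0 } });
        var b = DenseVector.OfArray(new[] { 5.0, 6.0 });
        var x = new DenseVector(2);

        // The solver alternates BiCGStab and GPBiCG steps; the switching counts
        // documented below are left at their default values here.
        var solver = new GpBiCg();
        solver.Solve(A, b, x,
            new Iterator<double>(
                new IterationCountStopCriterion<double>(1000),
                new ResidualStopCriterion<double>(1e-10)),
            new DiagonalPreconditioner());

        Console.WriteLine(x);
    }
}
```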
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
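A sketch of plugging the ILU(0) preconditioner described above into one of the iterative solvers from this file; the class name ILU0Preconditioner and its parameterless constructor are assumptions based on the summary above.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // assumed location of ILU0Preconditioner and BiCgStab
using MathNet.Numerics.LinearAlgebra.Solvers;

class Ilu0PreconditionerExample
{
    static void Main()
    {
        var A = SparseMatrix.OfArray(new[,]
        {
            { 4.0, 1.0, 0.0 },
            { 1.0, 4.0, 1.0 },
            { 0.0, 1.0, 4.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
        var x = new DenseVector(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // Initialize() loads the combined L/U data; Approximate() is then applied
        // during the iterations (see the member summaries in this section).
        new BiCgStab().Solve(A, b, x, iterator, new ILU0Preconditioner());

        Console.WriteLine(x);
    }
}
```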
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
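The example code mentioned above is missing here as well; a short sketch with the default number of starting vectors, under the same naming assumptions as the earlier solver examples.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // assumed location of MlkBiCgStab
using MathNet.Numerics.LinearAlgebra.Solvers;

class MlkBiCgStabExample
{
    static void Main()
    {
        // Tridiagonal test system, large enough to leave room for several starting vectors.
        const int n = 10;
        var A = new SparseMatrix(n, n);
        for (int i = 0; i < n; i++)
        {
            A[i, i] = 4.0;
            if (i > 0) A[i, i - 1] = -1.0;
            if (i < n - 1) A[i, i + 1] = -1.0;
        }
        var b = DenseVector.Create(n, 1.0);
        var x = new DenseVector(n);

        // The default number of starting vectors for the Krylov sub-space is used;
        // the property documented below allows tuning it.
        new MlkBiCgStab().Solve(A, b, x,
            new Iterator<double>(
                new IterationCountStopCriterion<double>(1000),
                new ResidualStopCriterion<double>(1e-10)),
            new DiagonalPreconditioner());

        Console.WriteLine(x);
    }
}
```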
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
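As with the other solvers, the referenced example code is not present; a brief sketch under the same naming assumptions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // assumed location of TFQMR
using MathNet.Numerics.LinearAlgebra.Solvers;

class TfqmrExample
{
    static void Main()
    {
        var A = SparseMatrix.OfArray(new[,] { { 2.0, 1.0 }, { 0.0, 3.0 } });
        var b = DenseVector.OfArray(new[] { 3.0, 6.0 });
        var x = new DenseVector(2);

        // Transpose free: only products with A itself are required during the iterations.
        new TFQMR().Solve(A, b, x,
            new Iterator<double>(
                new IterationCountStopCriterion<double>(1000),
                new ResidualStopCriterion<double>(1e-10)),
            new DiagonalPreconditioner());

        Console.WriteLine(x);   // expected solution: (0.5, 2)
    }
}
```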
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
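The documentation above covers the CSR-backed sparse matrix: its factory methods, triangle extraction, norms and arithmetic. A short sketch of how those members are typically used; the factory name `SparseMatrix.OfIndexed` and the `NonZerosCount` property are assumed from recent Math.NET Numerics releases.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseMatrixSketch
{
    static void Main()
    {
        // Build a 1000 x 1000 sparse matrix from (row, column, value) triples;
        // every cell that is not listed stays zero in the CSR storage.
        var entries = new[]
        {
            Tuple.Create(0, 0, 2.0),
            Tuple.Create(1, 1, 3.0),
            Tuple.Create(0, 999, -1.0),
            Tuple.Create(999, 999, 4.0)
        };
        SparseMatrix m = SparseMatrix.OfIndexed(1000, 1000, entries);
        Console.WriteLine(m.NonZerosCount);     // 4 stored values

        // Norms described above.
        Console.WriteLine(m.FrobeniusNorm());   // square root of the sum of squares
        Console.WriteLine(m.InfinityNorm());    // maximum absolute row sum

        // Triangle extraction returns new, independent matrices.
        Matrix<double> lower = m.LowerTriangle();
        Matrix<double> strictUpper = m.StrictlyUpperTriangle();

        // Matrix and pointwise arithmetic; results are allocated separately.
        Matrix<double> symmetricPart = m + m.Transpose();
        Matrix<double> squared = m.PointwiseMultiply(m);
        Console.WriteLine(symmetricPart.RowCount + " x " + squared.ColumnCount);
    }
}
```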
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
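The sparse vector described above behaves like any other vector, with the caveat the documentation itself raises: adding a non-zero scalar fills every cell. A small sketch, assuming the `SparseVector.OfIndexedEnumerable` factory name and the `NonZerosCount` property from recent Math.NET Numerics releases.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseVectorSketch
{
    static void Main()
    {
        // A length-10000 vector with only three explicitly stored entries.
        SparseVector v = SparseVector.OfIndexedEnumerable(10000, new[]
        {
            Tuple.Create(0, 1.0),
            Tuple.Create(500, -2.5),
            Tuple.Create(9999, 4.0)
        });
        Console.WriteLine(v.NonZerosCount);       // 3

        // Scaling and dot products keep the sparsity intact.
        Vector<double> scaled = v.Multiply(2.0);
        Console.WriteLine(scaled.DotProduct(v));  // 2*(1 + 6.25 + 16) = 46.5

        // As the documentation above warns, adding a non-zero scalar produces a
        // fully populated result, which defeats the purpose of sparse storage.
        Vector<double> densified = v.Add(1.0);
        Console.WriteLine(densified[1]);          // 1.0, now explicitly stored
    }
}
```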
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + double version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
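Several of the entries above distinguish the canonical modulus (result takes the sign of the divisor) from the remainder (result takes the sign of the dividend) and describe the pointwise operations on the double vector type. A brief sketch of the difference, assuming the method names `PointwiseMultiply`, `PointwiseDivide`, `PointwisePower`, `Modulus` and `Remainder` as in recent Math.NET Numerics releases.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class PointwiseSketch
{
    static void Main()
    {
        Vector<double> a = DenseVector.OfArray(new[] { 1.0, -4.0, 9.0 });
        Vector<double> b = DenseVector.OfArray(new[] { 2.0, 2.0, 3.0 });

        // Element-by-element operations return a new vector of the same length.
        Console.WriteLine(a.PointwiseMultiply(b)); //  2, -8, 27
        Console.WriteLine(a.PointwiseDivide(b));   //  0.5, -2, 3
        Console.WriteLine(a.PointwisePower(2.0));  //  1, 16, 81

        // Modulus follows the sign of the divisor, Remainder the sign of the dividend,
        // matching the distinction drawn repeatedly in the documentation above.
        Console.WriteLine(a.Modulus(3.0));         //  1, 2, 0
        Console.WriteLine(a.Remainder(3.0));       //  1, -1, 0 (keeps the sign of -4)

        // Dot product: sum of a[i] * b[i].
        Console.WriteLine(a.DotProduct(b));        // 2 - 8 + 27 = 21
    }
}
```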
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
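The constructors above distinguish the copying builders from the constructor that binds directly to a caller-supplied column-major array. A small sketch of both, assuming the `DenseMatrix.OfArray`, `OfColumnArrays` and `CreateIdentity` factory names from recent Math.NET Numerics releases.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseMatrixSketch
{
    static void Main()
    {
        // Copying builder: the matrix gets its own memory block.
        DenseMatrix a = DenseMatrix.OfArray(new double[,]
        {
            { 1, 2 },
            { 3, 4 }
        });

        // Direct binding: the column-major array is used without copying,
        // so changes to the array and the matrix affect each other.
        double[] columnMajor = { 1, 3, 2, 4 };      // columns (1,3) and (2,4)
        var bound = new DenseMatrix(2, 2, columnMajor);
        columnMajor[0] = 99;
        Console.WriteLine(bound[0, 0]);             // 99

        // Other builders described above.
        DenseMatrix fromColumns = DenseMatrix.OfColumnArrays(
            new[] { 1.0, 3.0 }, new[] { 2.0, 4.0 });
        DenseMatrix identity = DenseMatrix.CreateIdentity(3);

        Console.WriteLine(a.Equals(fromColumns));   // True: same values, independent storage
        Console.WriteLine(identity.Trace());        // 3
    }
}
```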
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. 
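The arithmetic entries above include the "multiply with transpose" variants and the three matrix norms. A short illustration with a 2 x 2 matrix; the method names are as in recent Math.NET Numerics releases.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class MatrixArithmeticSketch
{
    static void Main()
    {
        Matrix<double> a = DenseMatrix.OfArray(new double[,] { { 1, 2 }, { 3, 4 } });
        Vector<double> v = DenseVector.OfArray(new[] { 1.0, 1.0 });

        // Plain products.
        Console.WriteLine(a * v);                          // (3, 7)
        Console.WriteLine(a * a);                          // matrix product

        // The "with transpose" variants avoid forming the transpose explicitly.
        Console.WriteLine(a.TransposeThisAndMultiply(a));  // A' * A
        Console.WriteLine(a.TransposeAndMultiply(a));      // A * A'

        // Norms documented above.
        Console.WriteLine(a.L1Norm());         // maximum absolute column sum = 6
        Console.WriteLine(a.InfinityNorm());   // maximum absolute row sum    = 7
        Console.WriteLine(a.FrobeniusNorm());  // sqrt(1 + 4 + 9 + 16) ≈ 5.477

        // Pointwise (element-by-element) multiplication.
        Console.WriteLine(a.PointwiseMultiply(a));  // 1, 4 / 9, 16
    }
}
```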
+ + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. 
+ The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. 
+ The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiply this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply this one by. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a float dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. 
+ + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
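A diagonal matrix only stores its diagonal, which is why the remarks above restrict writes to off-diagonal cells. A hedged sketch, assuming the `Matrix<double>.Build.DiagonalOfDiagonalArray` builder from recent Math.NET Numerics releases; the exact exception type thrown on an off-diagonal write is not specified here, so the example simply catches `Exception`.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DiagonalMatrixSketch
{
    static void Main()
    {
        // Only the diagonal is stored; every other cell is implicitly zero.
        Matrix<double> d = Matrix<double>.Build.DiagonalOfDiagonalArray(new[] { 2.0, 3.0, 4.0 });

        Console.WriteLine(d.Determinant());   // 2 * 3 * 4 = 24
        Console.WriteLine(d.Diagonal());      // (2, 3, 4)
        Console.WriteLine(d.Inverse());       // diagonal 0.5, 0.333..., 0.25

        // Writing zero (or NaN) to an off-diagonal cell causes no change,
        // but writing any other value is rejected, as noted in the remarks above.
        d[0, 1] = 0.0;                        // allowed, no effect
        try
        {
            d[0, 1] = 5.0;                    // not representable in diagonal storage
        }
        catch (Exception ex)
        {
            Console.WriteLine("off-diagonal write rejected: " + ex.GetType().Name);
        }
    }
}
```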
+ + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. 
+ + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. 
On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. 
+ The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. 
+ Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. 
+ If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. 
+ If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. 
On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + float version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. 
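
The factorization classes documented above (Cholesky, LU, QR, EVD, SVD) all follow the same pattern: the decomposition is computed and cached when the object is constructed, and Solve then reuses the cached factor for one or more right-hand sides. As a rough usage sketch only, assuming the Matrix<double>.Build / factorization surface of current Math.NET Numerics releases rather than anything taken from this documentation:

```csharp
using MathNet.Numerics.LinearAlgebra;

class FactorizationExample
{
    static void Main()
    {
        // Symmetric positive definite, so Cholesky applies: A = L*L'.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1, 0 },
            { 1, 4, 1 },
            { 0, 1, 4 }
        });
        var b = Vector<double>.Build.Dense(new double[] { 1, 2, 3 });

        var cholesky = a.Cholesky();   // throws if A is not symmetric positive definite
        var x = cholesky.Solve(b);     // solves A*x = b using the cached factor

        // The other factorizations expose the same Solve pattern.
        var xLu = a.LU().Solve(b);
        var xQr = a.QR().Solve(b);
        var xSvd = a.Svd().Solve(b);

        System.Console.WriteLine(x - xLu);   // numerically zero
    }
}
```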
+ + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. 
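
The distinction drawn above between the canonical modulus (result takes the sign of the divisor) and the remainder, i.e. the % operator (result takes the sign of the dividend), is easy to get wrong. A minimal scalar sketch of the two rules, with hypothetical helper names:

```csharp
using System;

static class ModSketch
{
    // Remainder: same behaviour as the C# % operator, sign follows the dividend.
    public static double Remainder(double dividend, double divisor) => dividend % divisor;

    // Canonical modulus: result takes the sign of the divisor.
    public static double Modulus(double dividend, double divisor)
    {
        double r = dividend % divisor;
        return (r != 0.0 && Math.Sign(r) != Math.Sign(divisor)) ? r + divisor : r;
    }

    static void Main()
    {
        Console.WriteLine(Remainder(-7.0, 3.0)); // -1: sign of the dividend
        Console.WriteLine(Modulus(-7.0, 3.0));   //  2: sign of the divisor
    }
}
```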
+ + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
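
The inline example referred to above is not reproduced here. As a rough indication only, assuming the Iterator / stop-criterion / SolveIterative API of current Math.NET Numerics releases (an assumption about the API version, not taken from this documentation), using BiCGStab could look like this:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabExample
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1, 0 },
            { 1, 4, 1 },
            { 0, 1, 4 }
        });
        var b = Vector<double>.Build.Dense(new double[] { 1, 2, 3 });

        // Stop after 1000 iterations or once the residual is small enough.
        var monitor = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        var x = a.SolveIterative(b, solver, monitor);

        System.Console.WriteLine(x);
    }
}
```

A suitable preconditioner (for example the diagonal preconditioner documented later in this section) can make the difference between fast convergence and stagnation, as the remarks above point out.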
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
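
The composite idea described above amounts to trying a sequence of sub-solvers in order and accepting the first result that converges. The following is a sketch of that concept only, not the library's CompositeSolver implementation; the delegate shape and the residual test are assumptions:

```csharp
using System;
using System.Collections.Generic;

static class CompositeSolverSketch
{
    // Try each sub-solver in turn; the first answer whose true residual
    // ||b - A*x|| is below the tolerance wins.
    public static double[] Solve(double[,] a, double[] b,
                                 IEnumerable<Func<double[,], double[], double[]>> subSolvers,
                                 double tolerance = 1e-10)
    {
        foreach (var solver in subSolvers)
        {
            var x = solver(a, b);
            if (ResidualNorm(a, x, b) <= tolerance)
                return x;   // first converged sub-solver wins
        }
        throw new InvalidOperationException("No sub-solver reached the requested tolerance.");
    }

    // True residual as defined throughout this section: residual = b - A*x.
    static double ResidualNorm(double[,] a, double[] x, double[] b)
    {
        int n = b.Length;
        double sum = 0.0;
        for (int i = 0; i < n; i++)
        {
            double axi = 0.0;
            for (int j = 0; j < n; j++) axi += a[i, j] * x[j];
            sum += (b[i] - axi) * (b[i] - axi);
        }
        return Math.Sqrt(sum);
    }
}
```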
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
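
For GPBiCG the usage mirrors the BiCGStab sketch above; only the solver instance changes. The two step-count settings correspond to the "number of steps before switching" properties documented below, but their exact names here, and the reuse of `a`, `b` and `monitor` from that sketch, are assumptions:

```csharp
// Continues the BiCGStab sketch above (same a, b and monitor); names are assumptions.
var gpBiCg = new MathNet.Numerics.LinearAlgebra.Double.Solvers.GpBiCg
{
    NumberOfBiCgStabSteps = 2,  // BiCGStab steps taken before switching (assumed property name)
    NumberOfGpBiCgSteps = 4     // GPBiCG steps taken before switching back (assumed property name)
};
var xGp = a.SolveIterative(b, gpBiCg, monitor);
```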
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
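
The defining property of the ILU(0) preconditioner introduced above is that it keeps the sparsity pattern of A: elimination only updates entries that are non-zero in the original matrix, so no fill-in is created and L and U can share one combined storage. A compact dense-storage sketch of that zero-fill rule (illustrative only, not the library's sparse implementation):

```csharp
using System;

static class Ilu0Sketch
{
    // In-place ILU(0) on a dense copy of A: Gaussian elimination, except that an
    // entry is only updated where A itself was non-zero. The unit-lower L and the
    // upper U end up sharing the single array, matching the combined storage
    // described in the documentation.
    public static void Factor(double[,] a)
    {
        int n = a.GetLength(0);
        var pattern = new bool[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                pattern[i, j] = a[i, j] != 0.0;

        for (int k = 0; k < n - 1; k++)
        {
            if (a[k, k] == 0.0) throw new InvalidOperationException("Zero pivot at step " + k);
            for (int i = k + 1; i < n; i++)
            {
                if (!pattern[i, k]) continue;
                a[i, k] /= a[k, k];                 // multiplier, stored in the L part
                for (int j = k + 1; j < n; j++)
                    if (pattern[i, j])              // zero-fill rule: skip positions outside the pattern
                        a[i, j] -= a[i, k] * a[k, j];
            }
        }
    }
}
```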
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
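
The drop-tolerance and pivot-tolerance rules described above can be summarized as two small predicates. This is a sketch of the documented rules only; taking absolute values in the pivot test is an assumption, and the helper names are made up:

```csharp
using System;

static class IlutpRules
{
    // Drop rule: an entry is discarded from the incomplete factors when its
    // magnitude falls below the absolute drop tolerance.
    public static bool Drop(double value, double dropTolerance)
        => Math.Abs(value) < dropTolerance;

    // Pivot rule as described above: pivot when some row entry exceeds the
    // diagonal entry divided by the pivot tolerance (for j != i).
    // A tolerance of 0 disables pivoting.
    public static bool ShouldPivot(double rowEntry, double diagonalEntry, double pivotTolerance)
        => pivotTolerance > 0.0
           && Math.Abs(rowEntry) > Math.Abs(diagonalEntry) / pivotTolerance;
}
```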
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
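
As with the other solvers, the example referred to above is not reproduced here. Under the same API assumption as the BiCGStab sketch, switching to the multiple-Lanczos variant is again just a different solver instance; the property name for the number of starting vectors is an assumption:

```csharp
// Continues the BiCGStab sketch above (same a, b and monitor); names are assumptions.
var mlk = new MathNet.Numerics.LinearAlgebra.Double.Solvers.MlkBiCgStab
{
    // Must be larger than 1 and smaller than the number of unknowns (3 in the sketch above).
    NumberOfStartingVectors = 2   // assumed property name
};
var xMlk = a.SolveIterative(b, mlk, monitor);
```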
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
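
TFQMR plugs into the same usage pattern sketched for BiCGStab above, so it is not repeated here. More interesting is the point made in the ML(k)-BiCGStab description earlier: the Krylov subspace is built from a set of random starting vectors that form an orthonormal basis. One way to produce such a set, using the modified Gram-Schmidt step that also appears in the QR section above (illustrative, not the library's routine):

```csharp
using System;

static class StartingVectorsSketch
{
    // Create `count` random vectors of the given length and orthonormalize them
    // with modified Gram-Schmidt, yielding an orthonormal basis for a Krylov
    // subspace as described for ML(k)-BiCGStab.
    public static double[][] Create(int count, int length, int seed = 1)
    {
        var rng = new Random(seed);
        var v = new double[count][];
        for (int k = 0; k < count; k++)
        {
            v[k] = new double[length];
            for (int i = 0; i < length; i++) v[k][i] = rng.NextDouble() - 0.5;

            // Remove the components along the previously accepted vectors,
            // using the partially updated vector in each projection (MGS).
            for (int j = 0; j < k; j++)
            {
                double dot = 0.0;
                for (int i = 0; i < length; i++) dot += v[j][i] * v[k][i];
                for (int i = 0; i < length; i++) v[k][i] -= dot * v[j][i];
            }

            // Normalize to unit length.
            double norm = 0.0;
            for (int i = 0; i < length; i++) norm += v[k][i] * v[k][i];
            norm = Math.Sqrt(norm);
            for (int i = 0; i < length; i++) v[k][i] /= norm;
        }
        return v;
    }
}
```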
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
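
The sparse matrix described above stores its data in 3-array compressed-sparse-row (CSR) form: a values array, a parallel column-index array, and a row-pointer array marking where each row starts. The operation the iterative solvers in this section rely on most, the matrix-vector product, then looks like the following sketch (array names are illustrative, not the library's internals):

```csharp
static class CsrSketch
{
    // y = A*x for a matrix stored in 3-array CSR form:
    //   values[k]        k-th non-zero value
    //   columnIndices[k] column of values[k]
    //   rowPointers[i]   index where row i starts in values/columnIndices;
    //                    rowPointers has rowCount + 1 entries, the last being
    //                    the total number of non-zeros.
    public static double[] Multiply(double[] values, int[] columnIndices, int[] rowPointers, double[] x)
    {
        int rows = rowPointers.Length - 1;
        var y = new double[rows];
        for (int i = 0; i < rows; i++)
        {
            double sum = 0.0;
            for (int k = rowPointers[i]; k < rowPointers[i + 1]; k++)
                sum += values[k] * x[columnIndices[k]];
            y[i] = sum;
        }
        return y;
    }
}
```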
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
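
The dot product documented just above only has to visit the stored non-zero entries, which is the whole point of the sparse layout (and why the warning about adding a non-zero scalar matters: that operation touches every position and produces a fully dense result). A minimal sketch over an assumed (indices, values) representation of the non-zero entries:

```csharp
static class SparseVectorSketch
{
    // Dot product between a sparse vector, stored as parallel (indices, values)
    // arrays of its non-zero entries, and a dense vector: only the stored
    // entries contribute to the sum a[i]*b[i].
    public static double Dot(int[] indices, double[] values, double[] dense)
    {
        double sum = 0.0;
        for (int k = 0; k < indices.Length; k++)
            sum += values[k] * dense[indices[k]];
        return sum;
    }
}
```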
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a float sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + float version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
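For the norms listed above, the p-norm formula is ret = (Σ|x[i]|^p)^(1/p), with the L1 and infinity norms as the usual special cases. A minimal NumPy sketch of these definitions (again only an illustration, not the documented library calls):

```python
# Sketch of the vector norms described above (NumPy, for illustration only).
import numpy as np

x = np.array([3.0, -4.0, 0.0, 1.0])

l1   = np.sum(np.abs(x))                     # L1 / Manhattan norm     -> 8.0
linf = np.max(np.abs(x))                     # infinity norm           -> 4.0
p    = 3.0
lp   = np.sum(np.abs(x) ** p) ** (1.0 / p)   # p-norm: (sum |x[i]|^p)^(1/p)

print(l1, linf, lp)
print(np.argmax(np.abs(x)))                  # index of the absolute maximum element -> 1
```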
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
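The dense matrix described above stores its data as a one-dimensional array in column-major order, so element (row, col) of an m-by-n matrix sits at flat index col*m + row. A small NumPy sketch of that index mapping (illustration only; the flat array layout is the assumption being demonstrated):

```python
# Column-major ("Fortran order") storage sketch: element (row, col) of an
# m-by-n matrix lives at data[col * m + row] in the flat array.
import numpy as np

m, n = 3, 2
data = np.array([1.0, 2.0, 3.0,    # column 0
                 4.0, 5.0, 6.0])   # column 1

A = data.reshape((m, n), order='F')   # rebuild the matrix column by column
print(A)
# [[1. 4.]
#  [2. 5.]
#  [3. 6.]]

row, col = 2, 1
assert A[row, col] == data[col * m + row]   # 6.0 at flat index 5
```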
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. 
+ Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. 
+ + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex value. + The result of the division. + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. 
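The distinction drawn above between the plain dot product (Σ a[i]*b[i]) and the conjugate dot product (Σ conj(a[i])*b[i]) only matters for complex vectors. A NumPy sketch of the two, offered as a concept illustration rather than the documented .NET methods:

```python
# Dot product vs. conjugate dot product for complex vectors (NumPy sketch).
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1 + 4j])

plain = np.sum(a * b)     # sum of a[i]*b[i]
conj  = np.vdot(a, b)     # sum of conj(a[i])*b[i] (vdot conjugates its first argument)

print(plain, conj)
```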
+ + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. 
+ + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the Frobenius norm of this matrix. + The Frobenius norm of this matrix. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. 
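The Cholesky factorization described above writes a symmetric positive definite A as L*L' with L lower triangular, which is then reused to solve AX = B and to read off the (log) determinant. A short NumPy/SciPy sketch of exactly that workflow, as a stand-in for the documented factorization class:

```python
# Cholesky sketch (NumPy/SciPy): A = L * L^T for a symmetric positive definite A,
# then solve A x = b and recover log(det(A)) from the factor.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # symmetric, positive definite
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)           # lower triangular, A == L @ L.T
assert np.allclose(L @ L.T, A)

c, low = cho_factor(A)
x = cho_solve((c, low), b)          # solves A x = b
assert np.allclose(A @ x, b)

log_det = 2.0 * np.sum(np.log(np.diag(L)))   # log determinant of A
assert np.isclose(log_det, np.log(np.linalg.det(A)))
```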
+ + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. 
This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. 
A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . 
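The SVD-related quantities mentioned above (2-norm, effective numerical rank, condition number) all come straight from the singular values of M = UΣV^T. A NumPy sketch, with the rank tolerance chosen here as an assumption for illustration:

```python
# SVD sketch (NumPy): M = U * diag(S) * V^T; the 2-norm is max(S), the
# effective rank counts the non-negligible singular values, and the
# condition number is max(S) / min(S).
import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, S, Vt = np.linalg.svd(M)
assert np.allclose(U @ np.diag(S) @ Vt, M)

two_norm  = S.max()
tol       = max(M.shape) * np.finfo(float).eps * S.max()   # assumed tolerance
rank      = int(np.sum(S > tol))          # non-negligible singular values
condition = S.max() / S.min()

print(two_norm, rank, condition)
```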
+ + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. 
+ Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. 
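As noted above, the pivoted LU factorization P*A = L*U is computed once and then reused both for solving linear systems and for forming the inverse. A SciPy sketch of that pattern (illustrative only; not the documented .NET class):

```python
# LU sketch (SciPy): factor once as P*A = L*U, then reuse the factorization
# to solve A x = b and to build the inverse column by column.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
b = np.array([3.0, 7.0])

lu, piv = lu_factor(A)              # combined L/U factors plus pivot indices
x = lu_solve((lu, piv), b)          # solves A x = b
assert np.allclose(A @ x, b)

A_inv = lu_solve((lu, piv), np.eye(2))   # inverse via the same factorization
assert np.allclose(A @ A_inv, np.eye(2))
```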
+ + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex value z1 + Complex value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. 
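The Givens rotation described above chooses c and s so that the rotation zeros the y-coordinate of the point (a, b), which is what the DROTG routine computes. A minimal sketch, ignoring the scaling/overflow safeguards of the real BLAS/LAPACK routine; the helper name `givens` is just an illustrative choice:

```python
# Givens rotation sketch: choose c, s so that [c s; -s c] * [a; b] = [r; 0].
# (A minimal version of what DROTG computes, without its overflow safeguards.)
import numpy as np

def givens(a, b):
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0, 0.0        # degenerate case: identity rotation
    return a / r, b / r, r          # c, s, r

a, b = 3.0, 4.0
c, s, r = givens(a, b)
G = np.array([[c, s],
              [-s, c]])
print(G @ np.array([a, b]))         # -> [5. 0.]  (y-coordinate zeroed)
```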
+ + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. 
+ + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
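The remarks above refer to example code that was lost when the XML tags were stripped. The following is a hedged reconstruction of a typical BiCGStab call, assuming the solver, iterator and preconditioner classes live in Math.NET-Numerics-style `LinearAlgebra.Solvers` / `LinearAlgebra.Complex.Solvers` namespaces; the names are assumptions, not taken from this file.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;

static class BiCgStabSketch
{
    public static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        // Stop after 1000 iterations or once the residual ||b - A*x|| is small enough.
        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        // BiCGStab handles non-symmetric systems; convergence depends heavily
        // on the preconditioner, here the simple diagonal (Jacobi) one.
        return a.SolveIterative(b, new BiCgStab(), iterator, new DiagonalPreconditioner());
    }
}
```

Swapping the diagonal preconditioner for one of the incomplete-LU variants documented further below is usually the first thing to try when convergence stalls.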
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373 - 387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
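Because the note above says one iterator is shared by every sub-solver, a sketch of building that shared iterator may be useful; the stop-criterion types are assumed from a Math.NET-Numerics-style `LinearAlgebra.Solvers` namespace, and the composite solver's own setup API is not shown because it is not visible in this file.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class SharedIteratorSketch
{
    // One Iterator instance: whichever sub-solver the composite solver tries next,
    // the same iteration budget, residual target and divergence guard apply.
    public static Iterator<Complex> Create() =>
        new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(2000),
            new ResidualStopCriterion<Complex>(1e-8),
            new DivergenceStopCriterion<Complex>());
}
```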
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
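As with BiCGStab, the example referred to above was stripped out. The sketch below shows the hybrid solver with its two switching step counts set explicitly; the class and property names (`GpBiCg`, `NumberOfBiCgStabSteps`, `NumberOfGpBiCgSteps`) are assumed from a Math.NET-Numerics-style API.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;

static class GpBiCgSketch
{
    public static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        // The solver alternates between BiCGStab and GPBiCG phases; these two
        // counts control how many steps each phase runs before switching.
        var solver = new GpBiCg
        {
            NumberOfBiCgStabSteps = 32,
            NumberOfGpBiCgSteps = 22
        };

        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        return a.SolveIterative(b, solver, iterator, new DiagonalPreconditioner());
    }
}
```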
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
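A short sketch of plugging the ILU(0) preconditioner into an iterative solve, assuming a Math.NET-Numerics-style `ILU0Preconditioner` class; the name is an assumption based on the description above.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;

static class Ilu0Sketch
{
    public static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        // ILU(0) keeps the sparsity pattern of A (no extra fill-in) and usually
        // accelerates convergence more than the plain diagonal preconditioner.
        var preconditioner = new ILU0Preconditioner();

        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        return a.SolveIterative(b, new BiCgStab(), iterator, preconditioner);
    }
}
```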
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
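The ILUTP preconditioner described above is parameterized by fill level, drop tolerance and pivot tolerance. The sketch below passes the three settings in that order, assuming a Math.NET-Numerics-style `ILUTPPreconditioner` whose constructor matches the settings listed above; both the class name and the constructor signature are assumptions.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;

static class IlutpSketch
{
    public static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        // More fill and a smaller drop tolerance give a denser but stronger
        // preconditioner; a non-zero pivot tolerance enables partial pivoting.
        var preconditioner = new ILUTPPreconditioner(
            10.0,   // fill level (fraction of the non-zero count of A)
            1e-4,   // drop tolerance
            0.5);   // pivot tolerance

        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        return a.SolveIterative(b, new BiCgStab(), iterator, preconditioner);
    }
}
```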
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
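The ML(k)-BiCGStab example mentioned above is also missing from this stripped documentation. Below is a hedged sketch that sets the number of Lanczos starting vectors explicitly; `MlkBiCgStab`, `NumberOfStartingVectors` and `MILU0Preconditioner` are assumed names in the Math.NET-Numerics style, and the system is assumed to have more unknowns than starting vectors.

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;

static class MlkBiCgStabSketch
{
    public static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        // k Lanczos starting vectors: must be larger than 1 and smaller
        // than the number of unknowns in the system.
        var solver = new MlkBiCgStab { NumberOfStartingVectors = 4 };

        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        return a.SolveIterative(b, solver, iterator, new MILU0Preconditioner());
    }
}
```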
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
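The TFQMR example referenced above was likewise stripped. A minimal hedged call, assuming the same Math.NET-Numerics-style solver and iterator types as in the earlier sketches:

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Complex.Solvers;

static class TfqmrSketch
{
    public static Vector<Complex> Solve(Matrix<Complex> a, Vector<Complex> b)
    {
        var iterator = new Iterator<Complex>(
            new IterationCountStopCriterion<Complex>(1000),
            new ResidualStopCriterion<Complex>(1e-10));

        // Transpose-free QMR: only products with A itself are needed,
        // which suits sparse matrices where A^T * x is awkward to form.
        return a.SolveIterative(b, new TFQMR(), iterator, new DiagonalPreconditioner());
    }
}
```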
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
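A brief sketch of creating the sparse types documented here from indexed entries, so that only the non-zero cells are stored; `SparseMatrix.OfIndexed` and `SparseVector.OfIndexedEnumerable` are assumed from a Math.NET-Numerics-style `LinearAlgebra.Complex` namespace.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra.Complex;

class SparseSketch
{
    static void Main()
    {
        // Only the listed entries are stored (CSR); every other cell is zero.
        var a = SparseMatrix.OfIndexed(1000, 1000, new[]
        {
            Tuple.Create(0, 0, new Complex(2, 0)),
            Tuple.Create(1, 1, new Complex(2, 0)),
            Tuple.Create(999, 999, new Complex(2, 0))
        });

        var v = SparseVector.OfIndexedEnumerable(1000, new[]
        {
            Tuple.Create(0, new Complex(1, 0)),
            Tuple.Create(999, new Complex(-1, 0))
        });

        Console.WriteLine(a.NonZerosCount);   // 3
        Console.WriteLine(v.NonZerosCount);   // 2
        Console.WriteLine((a * v).L1Norm());  // sparse matrix-vector product
    }
}
```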
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. 
+ + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. 
+ + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. 
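The dense matrix documented here stores its data as a single column-major array, and one constructor binds to a caller-supplied array without copying. A small sketch of the difference between binding and copying, assuming Math.NET-Numerics-style `DenseMatrix` constructors and the `OfArray` factory:

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra.Complex;

class DenseMatrixSketch
{
    static void Main()
    {
        // Column-major raw storage: the first two values are column 0.
        var raw = new[]
        {
            new Complex(1, 0), new Complex(3, 0),   // column 0
            new Complex(2, 0), new Complex(4, 0)    // column 1
        };

        // Binds to 'raw' directly (no copy): changing raw[0] also changes m[0, 0].
        var m = new DenseMatrix(2, 2, raw);

        // OfArray copies from a [row, column] 2D array and stays independent.
        var copy = DenseMatrix.OfArray(new Complex[,]
        {
            { new Complex(1, 0), new Complex(2, 0) },
            { new Complex(3, 0), new Complex(4, 0) }
        });

        Console.WriteLine(m.Equals(copy));      // True
        Console.WriteLine(m.FrobeniusNorm());   // sqrt(1 + 4 + 9 + 16)
    }
}
```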
+ + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. 
+ + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. 
+ + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. 
+ This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex32 value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex32 value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex32 value. + The result of the division. + If is . 
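The dense vector constructors and the vector arithmetic described above can be exercised in a few lines. A hedged sketch, again using the double-precision types for brevity; the Complex32 DenseVector documented here behaves the same way.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseVectorSketch
{
    static void Main()
    {
        // Copy construction from an array: the vector gets its own memory block.
        var v = DenseVector.OfArray(new double[] { 1.0, 2.0, 3.0 });

        // Length constructor: all cells are initialized to zero.
        var w = new DenseVector(3);
        w[0] = 4.0; w[1] = 5.0; w[2] = 6.0;

        Vector<double> sum    = v + w;        // element-wise addition
        Vector<double> scaled = 2.0 * v;      // scalar multiplication
        double dot = v.DotProduct(w);         // sum of v[i]*w[i], here 32

        Console.WriteLine(sum);
        Console.WriteLine(scaled);
        Console.WriteLine(dot);
        Console.WriteLine(v.L2Norm());        // Euclidean norm
    }
}
```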
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex32 dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex32 dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. 
+ All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. 
+ + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. 
+ + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. 
+ + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. 
+ + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. 
Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. 
+ + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex32 value z1 + Complex32 value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex32 version of the class. 
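The factorization classes above (Cholesky, LU, QR, SVD, EVD) are normally obtained through the corresponding factory methods on a matrix and then used to solve A·X = B or A·x = b. A small sketch with the double-precision API, assuming a symmetric positive definite system so that the Cholesky path is valid; the Complex32 factorizations documented here expose the same Solve overloads.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSketch
{
    static void Main()
    {
        // A symmetric, positive definite matrix, so Cholesky is applicable.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1, 0 },
            { 1, 3, 1 },
            { 0, 1, 2 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

        var xChol = a.Cholesky().Solve(b);  // A = L*L'
        var xLu   = a.LU().Solve(b);        // P*A = L*U (with pivoting)
        var xQr   = a.QR().Solve(b);        // A = Q*R (Householder by default)
        var xSvd  = a.Svd().Solve(b);       // A = U*S*V'

        // For a well-conditioned system all four give the same solution.
        Console.WriteLine(xChol);
        Console.WriteLine((a * xLu - b).L2Norm());  // residual, close to zero
    }
}
```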
+ + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
+ + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. 
+ Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
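A hedged sketch of one possible way to drive the BiCGStab solver on a sparse, non-symmetric system. The Iterator, stop-criterion and DiagonalPreconditioner class names are assumptions based on the library layout (they are not spelled out in the text), while the five-argument Solve call follows the signature documented just below.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        // A small non-symmetric system; BiCGStab does not require symmetry.
        var a = SparseMatrix.OfArray(new double[,]
        {
            { 4, -1,  0 },
            { 2,  5, -1 },
            { 0, -2,  6 }
        });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var x = Vector<double>.Build.Dense(3);   // result vector, filled in place

        // Stop after 1000 iterations or when the residual falls below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        var preconditioner = new DiagonalPreconditioner();  // inverse of the matrix diagonal

        solver.Solve(a, b, x, iterator, preconditioner);

        System.Console.WriteLine(x);
        System.Console.WriteLine((b - a * x).L2Norm());     // true residual, should be tiny
    }
}
```

As the remarks above point out, the choice of preconditioner largely determines how quickly (and whether) the iteration converges.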
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387

+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
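A hedged sketch of the GPBiCG solver in the same pattern; the class name GpBiCgStab is an assumption, and the setup mirrors the BiCgStab sketch above.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class GpBiCgStabSketch
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new double[,] { { 5, -1, 0 }, { 1, 4, -2 }, { 0, 1, 3 } });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 0.0, 2.0 });
        var x = Vector<double>.Build.Dense(3);

        // The hybrid alternates between BiCGStab and GPBiCG steps; the step counts
        // before switching are configurable through the properties documented below.
        var solver = new GpBiCgStab();

        var iterator = new Iterator<double>(new ResidualStopCriterion<double>(1e-10));
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        System.Console.WriteLine((b - a * x).L2Norm());
    }
}
```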
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
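The constructor described above takes the fill level, drop tolerance and pivot tolerance in that order. A short sketch, assuming the class is exposed as ILUTPPreconditioner in the Double.Solvers namespace (the name is not given in the text and is therefore an assumption); the resulting preconditioner is handed to an iterative solver exactly as in the earlier BiCgStab sketch.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class IlutpSketch
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new double[,] { { 4, -1, 0 }, { -1, 4, -1 }, { 0, -1, 4 } });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var x = Vector<double>.Build.Dense(3);

        // Arguments in the documented order: fill level, drop tolerance, pivot tolerance.
        // Class name and constructor shape are assumptions; the defaults quoted above
        // (200, 0.0001 and 0.0) apply when the parameterless constructor is used instead.
        var ilutp = new ILUTPPreconditioner(200.0, 1e-4, 0.5);

        var iterator = new Iterator<double>(new ResidualStopCriterion<double>(1e-10));
        new BiCgStab().Solve(a, b, x, iterator, ilutp);

        System.Console.WriteLine(x);
    }
}
```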
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal on Scientific Computing +
+ Volume 21, Number 4, pp. 1263-1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
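The embedded example referenced above did not survive in this text, so here is a hedged sketch of how such a solver is typically driven. The Solve(matrix, input, result, iterator, preconditioner) shape matches the parameter list documented further below; the concrete class names MlkBiCgStab, Iterator<>, IterationCountStopCriterion<>, ResidualStopCriterion<> and UnitPreconditioner<> are assumed Math.NET Numerics 3.x types, not confirmed by the text.

```csharp
// Hedged sketch (the original example was lost). Assumed class names:
// MlkBiCgStab, Iterator<>, IterationCountStopCriterion<>, ResidualStopCriterion<>,
// UnitPreconditioner<>. The Solve(A, b, x, iterator, preconditioner) call follows
// the parameter list documented below.
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;         // assumed namespace
using MathNet.Numerics.LinearAlgebra.Double.Solvers;  // assumed namespace

Matrix<double> a = Matrix<double>.Build.SparseOfArray(new double[,]
{
    { 4, 1, 0 },
    { 1, 4, 1 },
    { 0, 1, 4 }
});
Vector<double> b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });
Vector<double> x = Vector<double>.Build.Dense(b.Count);   // result vector, filled in place

var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(1000),        // hard iteration cap
    new ResidualStopCriterion<double>(1e-10));            // stop once the residual is small

var solver = new MlkBiCgStab();                           // default number of starting vectors
solver.Solve(a, b, x, iterator, new UnitPreconditioner<double>());
```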
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ The algorithm is described in Chapter 7, Section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
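As above, the original embedded example did not survive, so here is a comparable hedged sketch for the TFQMR solver. It follows the same Solve(matrix, input, result, iterator, preconditioner) shape documented below; TFQMR and DiagonalPreconditioner are assumed class names.

```csharp
// Hedged sketch; same assumptions as the ML(k)-BiCGStab block above, with
// TFQMR and DiagonalPreconditioner as the assumed class names.
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;         // assumed namespace
using MathNet.Numerics.LinearAlgebra.Double.Solvers;  // assumed namespace

Matrix<double> a = Matrix<double>.Build.SparseOfArray(new double[,]
{
    { 5, 2, 0 },
    { 2, 5, 2 },
    { 0, 2, 5 }
});
Vector<double> b = Vector<double>.Build.Dense(new[] { 1.0, 0.0, 1.0 });
Vector<double> x = Vector<double>.Build.Dense(b.Count);

var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(1000),
    new ResidualStopCriterion<double>(1e-10));

var solver = new TFQMR();                              // assumed class name
solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());
```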
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
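The entries in this block describe the CSR-backed sparse matrix and its arithmetic surface. A short sketch of that surface follows, shown with the double-precision variant for brevity (the complex variant documented here exposes the same members); OfArray, NonZerosCount, LowerTriangle, Transpose and FrobeniusNorm are assumed Math.NET Numerics member names.

```csharp
// Sketch of the CSR sparse matrix surface described in this block, using the
// double variant for brevity. OfArray, NonZerosCount, LowerTriangle, Transpose
// and FrobeniusNorm are assumed member names.
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

var m = SparseMatrix.OfArray(new double[,]
{
    { 2, 0, 1 },
    { 0, 3, 0 },
    { 0, 0, 4 }
});

Console.WriteLine(m.NonZerosCount);       // number of stored (non-zero) entries: 4
Matrix<double> lower = m.LowerTriangle(); // lower triangle, diagonal included
Matrix<double> prod  = m * m.Transpose(); // operator overloads allocate a new matrix
Console.WriteLine(m.FrobeniusNorm());     // square root of the sum of squared entries
```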
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
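A matching sketch for the sparse vector entries above, written with the Complex32 element type that the Parse/TryParse entries refer to. The builder overloads and the member names (SparseOfIndexed, DotProduct, ConjugateDotProduct, L1Norm, InfinityNorm) are assumed from the Math.NET Numerics 3.x API.

```csharp
// Sketch of the sparse vector surface described above (Complex32 element type).
// SparseOfIndexed, DotProduct, ConjugateDotProduct, L1Norm and InfinityNorm are
// assumed member names.
using System;
using MathNet.Numerics;
using MathNet.Numerics.LinearAlgebra;

var u = Vector<Complex32>.Build.SparseOfIndexed(5, new[]
{
    Tuple.Create(1, new Complex32(1f, 2f)),
    Tuple.Create(3, new Complex32(0f, -1f))
});
var v = Vector<Complex32>.Build.Dense(5, Complex32.One);

Console.WriteLine(u.DotProduct(v));           // sum of u[i] * v[i]
Console.WriteLine(u.ConjugateDotProduct(v));  // sum of conj(u[i]) * v[i]
Console.WriteLine(u.L1Norm());                // sum of absolute values (Manhattan norm)
Console.WriteLine(u.InfinityNorm());          // largest absolute value
```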
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex32. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex32 version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. 
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. 
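The pointwise operations listed above map onto instance methods; a brief sketch with the double variant follows. The entries above only establish that the functionality exists, so the method names (PointwiseMultiply, PointwiseDivide, PointwisePower, PointwiseExp) are assumptions.

```csharp
// Sketch of the pointwise operations listed above, double variant for brevity.
// Method names (PointwiseMultiply, PointwiseDivide, PointwisePower, PointwiseExp)
// are assumed.
using System;
using MathNet.Numerics.LinearAlgebra;

var a = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });
var b = Vector<double>.Build.Dense(new[] { 2.0, 2.0, 2.0 });

Console.WriteLine(a.PointwiseMultiply(b)); // element-wise product: 2, 4, 6
Console.WriteLine(a.PointwiseDivide(b));   // element-wise quotient: 0.5, 1, 1.5
Console.WriteLine(a.PointwisePower(2.0));  // element-wise square: 1, 4, 9
Console.WriteLine(a.PointwiseExp());       // element-wise exponential
```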
+ + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new matrix straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. 
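The builder entries above all hang off a single generic entry point. A compact sketch follows, assuming the Math.NET Numerics Matrix<double>.Build property; the overloads shown correspond to the zero, init-function, identity and random entries documented above and are otherwise assumptions.

```csharp
// Sketch of the generic matrix builder described above. Matrix<double>.Build and
// the overloads shown are assumed from the Math.NET Numerics 3.x API.
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

Matrix<double> zeros    = M.Dense(3, 4);                         // all cells initialized to zero
Matrix<double> ramp     = M.Dense(3, 4, (i, j) => 10.0 * i + j); // init function per cell
Matrix<double> identity = M.DenseIdentity(4);                    // one-diagonal identity
Matrix<double> random   = M.Random(3, 4);                        // standard distribution samples
```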
+ + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. 
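A companion sketch for the vector builder entries that follow, with the same caveat: Vector<double>.Build and the overload names are assumed from the Math.NET Numerics 3.x API.

```csharp
// Sketch for the vector builder entries that follow; Vector<double>.Build and
// the overloads shown are assumed.
using MathNet.Numerics.LinearAlgebra;

var V = Vector<double>.Build;

Vector<double> zeros  = V.Dense(5);               // all cells initialized to zero
Vector<double> filled = V.Dense(5, 1.5);          // every cell set to the same value
Vector<double> ramp   = V.Dense(5, i => 2.0 * i); // init function per cell
Vector<double> sparse = V.Sparse(1000);           // mostly-zero storage
```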
+ + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new matrix straight from an initialized matrix storage instance. 
+ If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). 
+ This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. 
+ Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. 
+ A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + Supported data types are double, single, , and . + + + + Gets the lower triangular form of the Cholesky matrix. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + Supported data types are double, single, , and . + + + + Gets or sets a value indicating whether matrix is symmetric or not + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Gets or sets the eigen values (λ) of matrix in ascending value. + + + + + Gets or sets eigenvectors. + + + + + Gets or sets the block diagonal eigenvalue matrix. + + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + Supported data types are double, single, , and . + + + + Classes that solves a system of linear equations, AX = B. + + Supported data types are double, single, , and . + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, Ax = b + + The right hand side vector, b. + The left hand side Vector, x. + + + + Solves a system of linear equations, Ax = b. + + The right hand side vector, b. + The left hand side Matrix>, x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + Supported data types are double, single, , and . + + + + Gets the lower triangular factor. + + + + + Gets the upper triangular factor. + + + + + Gets the permutation applied to LU factorization. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. 
+ + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + The type of QR factorization go perform. + + + + + Compute the full QR factorization of a matrix. + + + + + Compute the thin QR factorization of a matrix. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + Supported data types are double, single, , and . + + + + Gets or sets orthogonal Q matrix + + + + + Gets the upper triangular factor R. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + Supported data types are double, single, , and . + + + Indicating whether U and VT matrices have been computed during SVD factorization. + + + + Gets the singular values (Σ) of matrix in ascending value. + + + + + Gets the left singular vectors (U - m-by-m unitary matrix) + + + + + Gets the transpose right singular vectors (transpose of V, an n-by-n unitary matrix) + + + + + Returns the singular values as a diagonal . + + The singular values as a diagonal . 
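The factorization classes documented above (LU, QR, SVD) each expose their factors as properties. As a point of reference, here is a minimal sketch of reading those factors. It assumes these docs belong to Math.NET Numerics (the LU remarks mention the "Math.Net implementation") and that matrices are built through the `Matrix<double>.Build` factory; the builder calls and exact member names are inferred, not quoted from the text above.

```csharp
// Sketch: inspecting factorization components (assumes MathNet.Numerics.LinearAlgebra).
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationComponents
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1, 0 },
            { 1, 3, 1 },
            { 0, 1, 2 }
        });

        var lu  = a.LU();   // P*A = L*U, pivots stored separately
        var qr  = a.QR();   // A = Q*R (Householder)
        var svd = a.Svd();  // A = U*Σ*VT

        Console.WriteLine(lu.L);                // lower triangular factor
        Console.WriteLine(qr.R);                // upper triangular factor
        Console.WriteLine(svd.S);               // singular values of A
        Console.WriteLine(svd.ConditionNumber); // max(S) / min(S), as documented above
    }
}
```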
+ + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + Supported data types are double, single, , and . + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + + + + The value of 1.0. + + + + + The value of 0.0. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. 
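All of the factorizations above offer the `Solve` overloads for AX = B and Ax = b. A hedged sketch of solving the same system each way, under the same Math.NET Numerics assumption; the example matrix is symmetric positive definite so that Cholesky applies:

```csharp
// Sketch: solving A*x = b through the factorization Solve methods (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.LinearAlgebra;

class SolveExample
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1 },
            { 1, 3 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        var xChol = a.Cholesky().Solve(b); // requires symmetric positive definite A
        var xLu   = a.LU().Solve(b);       // general square A
        var xQr   = a.QR().Solve(b);       // also usable for least squares when m > n
        var xSvd  = a.Svd().Solve(b);      // most robust, works from the singular values

        Console.WriteLine(xChol);
        Console.WriteLine((xLu - xQr).L2Norm()); // should be ~0 for this well-conditioned system
    }
}
```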
+ + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar denominator to use. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar numerator to use. + The matrix to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent matrix and store the result into the result matrix. + + The exponent matrix to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Adds a scalar to each element of the matrix. + + The scalar to add. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds a scalar to each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the addition. 
+ If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix. + + The scalar to subtract. + A new matrix containing the subtraction of this matrix and the scalar. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts each element of the matrix from a scalar. + + The scalar to subtract from. + A new matrix containing the subtraction of the scalar and this matrix. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of this matrix with a scalar. + + The scalar to multiply with. + The result of the multiplication. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides each element of this matrix with a scalar. + + The scalar to divide with. + The result of the division. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides a scalar by each element of the matrix. + + The scalar to divide. + The result of the division. + + + + Divides a scalar by each element of the matrix and places results into the result matrix. + + The scalar to divide. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.ColumnCount != rightSide.Count. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.RowCount. + If this.ColumnCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ). + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. 
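The add, subtract, multiply and left-multiply members documented above are, in the assumed Math.NET Numerics API, also reachable through operators. A small sketch (the operator overloads are an inference; the text only names the methods):

```csharp
// Sketch: matrix/vector arithmetic via operators and the transpose-multiply helpers.
using System;
using MathNet.Numerics.LinearAlgebra;

class ArithmeticExample
{
    static void Main()
    {
        var a = Matrix<double>.Build.Dense(2, 3, (i, j) => i + j); // init function per cell
        var b = Matrix<double>.Build.Dense(2, 3, 1.0);             // all ones
        var v = Vector<double>.Build.Dense(3, 2.0);                // length-3, all 2.0

        var sum    = a + b;        // Add
        var diff   = a - b;        // Subtract
        var scaled = 2.5 * a;      // scalar multiply
        var av     = a * v;        // Matrix * Vector  -> length 2
        var va     = av * a;       // Vector * Matrix (left multiply) -> length 3
        var atb    = a.TransposeThisAndMultiply(b); // A^T * B, a 3x3 result

        Console.WriteLine(sum);
        Console.WriteLine(va);
        Console.WriteLine(atb);
    }
}
```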
+ + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.Rows. + If the result matrix's dimensions are not the this.Rows x other.Columns. + + + + Multiplies this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.Rows. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with the conjugate transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the conjugate transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the conjugate transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. 
+ The result of the multiplication. + + + + Raises this square matrix to a positive integer exponent and places the results into the result matrix. + + The positive integer exponent to raise the matrix to. + The result of the power. + + + + Multiplies this square matrix with another matrix and returns the result. + + The positive integer exponent to raise the matrix to. + + + + Negate each element of this matrix. + + A matrix containing the negated values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + if the result matrix's dimensions are not the same as this matrix. + + + + Complex conjugate each element of this matrix. + + A matrix containing the conjugated values. + + + + Complex conjugate each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + if the result matrix's dimensions are not the same as this matrix. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Pointwise multiplies this matrix with another matrix. + + The matrix to pointwise multiply with this one. + If this matrix and are not the same size. + A new matrix that is the pointwise multiplication of this matrix and . + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise divide this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + A new matrix that is the pointwise division of this matrix and . + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + If this matrix and are not the same size. 
+ If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise modulus. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise remainder. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Helper function to apply a unary function to a matrix. The function + f modifies the matrix given to it in place. Before its + called, a copy of the 'this' matrix is first created, then passed to + f. The copy is then returned as the result + + Function which takes a matrix, modifies it in place and returns void + New instance of matrix which is the result + + + + Helper function to apply a unary function which modifies a matrix + in place. + + Function which takes a matrix, modifies it in place and returns void + The matrix to be passed to f and where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two matrices + and modifies the latter in place. A copy of the "this" matrix is + first made and then passed to f together with the other matrix. The + copy is then returned as the result + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The resulting matrix + If this matrix and are not the same dimension. + + + + Helper function to apply a binary function which takes two matrices + and modifies the second one in place + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The matrix to store the result. + The resulting matrix + If this matrix and are not the same dimension. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The matrix to store the result. 
+ If this matrix and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + + + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + The other matrix 'y' + The matrix with the result and 'x' + + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Calculates the rank of the matrix. + + effective numerical rank, obtained from SVD + + + + Calculates the nullity of the matrix. + + effective numerical nullity, obtained from SVD + + + Calculates the condition number of this matrix. + The condition number of the matrix. 
+ The condition number is calculated using singular value decomposition. + + + Computes the determinant of this matrix. + The determinant of this matrix. + + + + Computes an orthonormal basis for the null space of this matrix, + also known as the kernel of the corresponding matrix transformation. + + + + + Computes an orthonormal basis for the column space of this matrix, + also known as the range or image of the corresponding matrix transformation. + + + + Computes the inverse of this matrix. + The inverse of this matrix. + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + If the result matrix's dimensions are not (this.Rows * lower.rows) x (this.Columns * lower.Columns). + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. 
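To make the pointwise and reduction members above concrete, a short sketch under the same Math.NET Numerics assumption; method names not spelled out literally in the summaries (for example `PointwiseMultiply` and `FrobeniusNorm`) are inferred:

```csharp
// Sketch: pointwise operations, norms and per-row/column reductions.
using System;
using MathNet.Numerics.LinearAlgebra;

class PointwiseAndNorms
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, -2 }, { 3, 4 } });
        var b = Matrix<double>.Build.Dense(2, 2, 2.0);

        var hadamard = a.PointwiseMultiply(b);  // element-by-element product
        var clipped  = a.PointwiseMaximum(0.0); // element-wise maximum with a scalar
        var expA     = a.PointwiseExp();        // element-wise exponential

        Console.WriteLine(hadamard);
        Console.WriteLine(clipped);
        Console.WriteLine(a.Trace());         // 1 + 4
        Console.WriteLine(a.Determinant());   // 1*4 - (-2)*3
        Console.WriteLine(a.L1Norm());        // maximum absolute column sum
        Console.WriteLine(a.FrobeniusNorm()); // sqrt of the sum of squared entries
        Console.WriteLine(a.RowSums());       // value sum of each row vector
        Console.WriteLine(expA);
    }
}
```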
+ + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + For sparse matrices, the L2 norm is computed using a dense implementation of singular value decomposition. + In a later release, it will be replaced with a sparse implementation. + + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Returns a string that describes the type, dimensions and shape of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes this matrix. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Matrix class. + + + + + Gets the raw matrix data storage. + + + + + Gets the number of columns. + + The number of columns. + + + + Gets the number of rows. + + The number of rows. + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. 
+ This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + + + + Sets the value of the given element without range checking. + + + The row of the element. + + + The column of the element. + + + The value to set the element to. + + + + + Sets all values to zero. + + + + + Sets all values of a row to zero. + + + + + Sets all values of a column to zero. + + + + + Sets all values for all of the chosen rows to zero. + + + + + Sets all values for all of the chosen columns to zero. + + + + + Sets all values of a sub-matrix to zero. + + + + + Set all values whose absolute value is smaller than the threshold to zero, in-place. + + + + + Set all values that meet the predicate to zero, in-place. + + + + + Creates a clone of this instance. + + + A clone of the instance. + + + + + Copies the elements of this matrix to the given matrix. + + + The matrix to copy values into. + + + If target is . + + + If this and the target matrix do not have the same dimensions.. + + + + + Copies a row into an Vector. + + The row to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of rows. + + + + Copies a row into to the given Vector. + + The row to copy. + The Vector to copy the row into. + If the result vector is . + If is negative, + or greater than or equal to the number of rows. + If this.Columns != result.Count. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of rows. + is negative, + or greater than or equal to the number of columns. + (columnIndex + length) >= Columns. + If is not positive. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Copies a column into a new Vector>. + + The column to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of columns. + + + + Copies a column into to the given Vector. + + The column to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If this.Rows != result.Count. + + + + Copies the requested column elements into a new Vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of columns. + is negative, + or greater than or equal to the number of rows. + (rowIndex + length) >= Rows. + + If is not positive. + + + + Copies the requested column elements into the given vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . 
+ If is negative, + or greater than or equal to the number of columns. + If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Returns the elements of the diagonal in a Vector. + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a new matrix and inserts the given column at the given index. + + The index of where to insert the column. + The column to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of columns. + If the size of != the number of rows. + + + + Creates a new matrix with the given column removed. + + The index of the column to remove. + A new matrix without the chosen column. + If is < zero or >= the number of columns. + + + + Copies the values of the given Vector to the specified column. + + The column to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + + + + Copies the values of the given Vector to the specified sub-column. + + The column to copy the values to. + The row to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. 
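The row, column and sub-matrix copy members above roughly correspond to `Row`, `Column`, `SubMatrix`, `SetRow` and `SetColumn` in Math.NET Numerics; those names are inferred from the summaries, so treat this as a sketch rather than a definitive listing:

```csharp
// Sketch: reading and writing rows, columns, diagonals and sub-matrices.
using System;
using MathNet.Numerics.LinearAlgebra;

class SlicingExample
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1, 2, 3 },
            { 4, 5, 6 },
            { 7, 8, 9 }
        });

        Vector<double> row1 = a.Row(1);                // independent copy of row 1: 4, 5, 6
        Vector<double> col2 = a.Column(2);             // independent copy of column 2: 3, 6, 9
        Matrix<double> sub  = a.SubMatrix(0, 2, 1, 2); // rows 0..1, columns 1..2
        Vector<double> diag = a.Diagonal();            // 1, 5, 9
        Matrix<double> up   = a.UpperTriangle();       // zeros below the diagonal

        a.SetRow(0, Vector<double>.Build.Dense(3, 0.0)); // overwrite row 0 in place
        a.SetColumn(2, new double[] { -1, -2, -3 });     // overwrite column 2 from an array

        Console.WriteLine(sub);
        Console.WriteLine(diag);
        Console.WriteLine(up);
        Console.WriteLine(row1 + col2);
        Console.WriteLine(a);
    }
}
```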
+ + + + Copies the values of the given array to the specified column. + + The column to copy the values to. + The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + If the size of does not + equal the number of rows of this Matrix. + + + + Creates a new matrix and inserts the given row at the given index. + + The index of where to insert the row. + The row to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of rows. + If the size of != the number of columns. + + + + Creates a new matrix with the given row removed. + + The index of the row to remove. + A new matrix without the chosen row. + If is < zero or >= the number of rows. + + + + Copies the values of the given Vector to the specified row. + + The row to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given Vector to the specified sub-row. + + The row to copy the values to. + The column to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given array to the specified row. + + The row to copy the values to. + The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The column to start copying to. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The number of rows to copy. Must be positive. + The column to start copying to. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The row of the sub-matrix to start copying from. + The number of rows to copy. Must be positive. + The column to start copying to. + The column of the sub-matrix to start copying from. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of the given Vector to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . 
+ If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Returns the transpose of this matrix. + + The transpose of this matrix. + + + + Puts the transpose of this matrix into the result matrix. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + + + + Concatenates this matrix with the given matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Concatenates this matrix with the given matrix and places the result into the result matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Diagonally stacks his matrix on top of the given matrix. The new matrix is a M-by-N matrix, + where M = this.Rows + lower.Rows and N = this.Columns + lower.Columns. + The values of off the off diagonal matrices/blocks are set to zero. + + The lower, right matrix. + If lower is . + the combined matrix + + + + + + Diagonally stacks his matrix on top of the given matrix and places the combined matrix into the result matrix. + + The lower, right matrix. + The combined matrix + If lower is . + If the result matrix is . + If the result matrix's dimensions are not (this.Rows + lower.rows) x (this.Columns + lower.Columns). + + + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Returns this matrix as a multidimensional array. + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + + A multidimensional containing the values of this matrix. + + + + Returns the matrix's elements as an array with the data laid out column by column (column major). + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns the matrix's elements as an array with the data laid out row by row (row major). The returned array will be independent from this matrix. A new memory block will be allocated for the array. +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
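The `ToArray`, `ToColumnMajorArray` and `ToRowMajorArray` members named above flatten the matrix exactly as shown in the two examples; a short sketch using the same 3x3 matrix:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class LayoutSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new[,]
        {
            { 1.0, 2.0, 3.0 },
            { 4.0, 5.0, 6.0 },
            { 7.0, 8.0, 9.0 }
        });

        double[,] grid     = m.ToArray();            // independent 2D copy
        double[]  colMajor = m.ToColumnMajorArray(); // 1, 4, 7, 2, 5, 8, 3, 6, 9
        double[]  rowMajor = m.ToRowMajorArray();    // 1, 2, 3, 4, 5, 6, 7, 8, 9

        Console.WriteLine(string.Join(", ", colMajor));
        Console.WriteLine(string.Join(", ", rowMajor));
    }
}
```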
+ + + Returns this matrix as an array of row arrays. The returned arrays will be independent from this matrix. A new memory block will be allocated for the arrays. + + + + Returns this matrix as an array of column arrays. The returned arrays will be independent from this matrix. A new memory block will be allocated for the arrays. + + + + Returns the internal multidimensional array of this matrix if, and only if, this matrix is stored by such an array internally. Otherwise returns null. Changes to the returned array and the matrix will affect each other. Use ToArray instead if you always need an independent array. + + + + Returns the internal column by column (column major) array of this matrix if, and only if, this matrix is stored by such arrays internally. Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. Use ToColumnMajorArray instead if you always need an independent array. +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns the internal row by row (row major) array of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToRowMajorArray instead if you always need an independent array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
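The difference between the internal (`As...`) and copying (`To...`) accessors described above can be illustrated as follows; whether the internal accessor returns a buffer or null depends on the storage type, and this sketch assumes dense column-major storage:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SharedBufferSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.Dense(2, 2, 1.0);

        // As*: the internal buffer, or null if the storage is not dense column-major.
        double[] shared = m.AsColumnMajorArray();
        if (shared != null)
        {
            shared[0] = 42.0;   // also changes m[0, 0]
        }

        // To*: always an independent copy.
        double[] copy = m.ToColumnMajorArray();
        copy[1] = -1.0;         // m is unaffected

        Console.WriteLine(m);
    }
}
```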
+ + + Returns the internal row arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToRowArrays instead if you always need an independent array. + + + + + Returns the internal column arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToColumnArrays instead if you always need an independent array. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix. + + The column to start enumerating over. + The number of columns to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix and their index. + + The column to start enumerating over. + The number of columns to enumerating over. + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix. + + The row to start enumerating over. + The number of rows to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix and their index. + + The row to start enumerating over. + The number of rows to enumerating over. + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. 
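A small sketch of the enumerators described above; the tuple layout of the indexed variants differs between library versions, so the code only relies on the `Item1..Item3` members:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class EnumerateSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.Dense(2, 3, (i, j) => (double)(i * 3 + j));

        // All values; ordering is unspecified and zeros are included.
        foreach (var value in m.Enumerate())
            Console.Write($"{value} ");
        Console.WriteLine();

        // Value plus its (row, column) index.
        foreach (var item in m.EnumerateIndexed())
            Console.WriteLine($"[{item.Item1},{item.Item2}] = {item.Item3}");

        // Whole columns as vectors.
        foreach (var column in m.EnumerateColumns())
            Console.WriteLine(column);
    }
}
```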
+ + + + + Applies a function to each value of this matrix and replaces the value with its result. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value with its result. + The row and column indices of each value (zero-based) are passed as first arguments to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + For each row, applies a function f to each element of the row, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each row. + + + + + For each column, applies a function f to each element of the column, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each column. + + + + + Applies a function f to each row vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Applies a function f to each column vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Reduces all row vectors by applying a function between two of them, until only a single vector is left. + + + + + Reduces all column vectors by applying a function between two of them, until only a single vector is left. 
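The map/fold family above might be exercised like this; `FoldByColumn` and the exact delegate shapes are assumptions based on the summaries:

```csharp
using MathNet.Numerics.LinearAlgebra;

class MapFoldSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.Dense(3, 3, (i, j) => (double)(i + j));

        var squared = m.Map(x => x * x);                     // new matrix, element-wise
        var shifted = m.MapIndexed((i, j, x) => x + i - j);  // index-aware variant
        m.MapInplace(x => 2.0 * x);                          // overwrites m itself

        // One accumulator per column, threaded through that column's elements.
        double[] columnSums = m.FoldByColumn((acc, x) => acc + x, 0.0);
    }
}
```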
+ + + + + Applies a function to each value pair of two matrices and replaces the value in the result vector. + + + + + Applies a function to each value pair of two matrices and returns the results as a new vector. + + + + + Applies a function to update the status with each value pair of two matrices and returns the resulting status. + + + + + Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a tuple with the index and values of the first element pair of two matrices of the same size satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element pairs of two matrices of the same size satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all elements satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all element pairs of two matrices of the same size satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to add. + The scalar value to add. + The result of the addition. + If is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to add. + The right matrix to add. + The result of the addition. + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts a scalar from each element of a matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to subtract. + The scalar value to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts each element of a matrix from a scalar. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to subtract. + The right matrix to subtract. + The result of the subtraction. 
+ If and don't have the same dimensions. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Divides a scalar with a matrix. + + The scalar to divide. + The matrix. + The result of the division. + If is . + + + + Divides a matrix with a scalar. + + The matrix to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of the matrix of the given divisor. + + The matrix whose elements we want to compute the modulus of. + The divisor to use. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of the given dividend of each element of the matrix. + + The dividend we want to compute the modulus of. + The matrix whose elements we want to use as divisor. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of two matrices. + + The matrix whose elements we want to compute the remainder of. + The divisor to use. + If and are not the same size. + If is . + + + + Computes the sqrt of a matrix pointwise + + The input matrix + + + + + Computes the exponential of a matrix pointwise + + The input matrix + + + + + Computes the log of a matrix pointwise + + The input matrix + + + + + Computes the log10 of a matrix pointwise + + The input matrix + + + + + Computes the sin of a matrix pointwise + + The input matrix + + + + + Computes the cos of a matrix pointwise + + The input matrix + + + + + Computes the tan of a matrix pointwise + + The input matrix + + + + + Computes the asin of a matrix pointwise + + The input matrix + + + + + Computes the acos of a matrix pointwise + + The input matrix + + + + + Computes the atan of a matrix pointwise + + The input matrix + + + + + Computes the sinh of a matrix pointwise + + The input matrix + + + + + Computes the cosh of a matrix pointwise + + The input matrix + + + + + Computes the tanh of a matrix pointwise + + The input matrix + + + + + Computes the absolute value of a matrix pointwise + + The input matrix + + + + + Computes the floor of a matrix pointwise + + The input matrix + + + + + Computes the ceiling of a matrix pointwise + + The input matrix + + + + + Computes the rounded value of a matrix pointwise + + The input matrix + + + + + Computes the Cholesky decomposition for a matrix. + + The Cholesky decomposition object. + + + + Computes the LU decomposition for a matrix. + + The LU decomposition object. 
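The operator overloads and element-wise helpers listed above can be sketched as follows; the `Pointwise*` names are inferred from the summaries rather than confirmed by this file:

```csharp
using MathNet.Numerics.LinearAlgebra;

class OperatorSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.Dense(2, 2, 2.0);
        var i = Matrix<double>.Build.DenseIdentity(2);
        var v = Vector<double>.Build.Dense(2, 1.0);

        var sum    = a + i;        // same dimensions required, allocates a new matrix
        var scaled = 3.0 * a;      // scalar * matrix
        var prod   = a * i;        // matrix product, inner dimensions must conform
        var mv     = a * v;        // matrix * vector

        var roots  = sum.PointwiseSqrt();  // element-wise square root
        var exped  = a.PointwiseExp();     // element-wise exponential
    }
}
```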
+ + + + Computes the QR decomposition for a matrix. + + The type of QR factorization to perform. + The QR decomposition object. + + + + Computes the QR decomposition for a matrix using Modified Gram-Schmidt Orthogonalization. + + The QR decomposition object. + + + + Computes the SVD decomposition for a matrix. + + Compute the singular U and VT vectors or not. + The SVD decomposition object. + + + + Computes the EVD decomposition for a matrix. + + The EVD decomposition object. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. 
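A sketch of the factorizations and direct solves described above, assuming the usual `Cholesky()/LU()/QR()/Svd()/Evd()` accessors (the summaries name the decompositions but not the members):

```csharp
using MathNet.Numerics.LinearAlgebra;

class DecompositionSketch
{
    static void Main()
    {
        // Symmetric positive definite, so Cholesky applies as well.
        var a = Matrix<double>.Build.DenseOfArray(new[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });

        var xChol = a.Cholesky().Solve(b);  // A x = b via the Cholesky factor
        var xLu   = a.LU().Solve(b);        // general square systems
        var xQr   = a.QR().Solve(b);        // also covers least-squares problems

        var svd = a.Svd(true);              // with singular vectors U and VT
        var evd = a.Evd();                  // eigenvalue decomposition

        // Convenience solve that picks a suitable factorization internally.
        var x = a.Solve(b);
    }
}
```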
+ + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The result matrix X. + + + + Converts a matrix to single precision. + + + + + Converts a matrix to double precision. + + + + + Converts a matrix to single precision complex numbers. + + + + + Converts a matrix to double precision complex numbers. + + + + + Gets a single precision complex matrix with the real parts from the given matrix. + + + + + Gets a double precision complex matrix with the real parts from the given matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Existing data may not be all zeros, so clearing may be necessary + if not all of it will be overwritten anyway. + + + + + If existing data is assumed to be all zeros already, + clearing it may be skipped if applicable. + + + + + Allow skipping zero entries (without enforcing skipping them). + When enumerating sparse matrices this can significantly speed up operations. + + + + + Force applying the operation to all fields even if they are zero. + + + + + It is not known yet whether a matrix is symmetric or not. + + + + + A matrix is symmetric + + + + + A matrix is Hermitian (conjugate symmetric). + + + + + A matrix is not symmetric + + + + + Defines an that uses a cancellation token as stop criterion. + + + + + Initializes a new instance of the class. + + + + + Initializes a new instance of the class. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. 
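The precision and complex conversion helpers summarised above could look roughly like the following; the member names (`ToSingle`, `ToComplex`, `Real`, `Imaginary`) are guesses from the summaries, not confirmed here:

```csharp
using MathNet.Numerics.LinearAlgebra;

class ConversionSketch
{
    static void Main()
    {
        var d = Matrix<double>.Build.Dense(2, 2, 1.5);

        var s = d.ToSingle();    // single precision copy (assumed name)
        var c = d.ToComplex();   // complex copy with zero imaginary parts (assumed name)

        var re = c.Real();       // real parts as a real matrix (assumed name)
        var im = c.Imaginary();  // imaginary parts as a real matrix (assumed name)
    }
}
```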
+ + + + Stop criterion that delegates the status determination to a delegate. + + + + + Create a new instance of this criterion with a custom implementation. + + Custom implementation with the same signature and semantics as the DetermineStatus method. + + + + Determines the status of the iterative calculation by delegating it to the provided delegate. + Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + + + + Clones this criterion and its settings. + + + + + Monitors an iterative calculation for signs of divergence. + + + + + The maximum relative increase the residual may experience without triggering a divergence warning. + + + + + The number of iterations over which a residual increase should be tracked before issuing a divergence warning. + + + + + The status of the calculation + + + + + The array that holds the tracking information. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified maximum + relative increase and the specified minimum number of tracking iterations. + + The maximum relative increase that the residual may experience before a divergence warning is issued. + The minimum number of iterations over which the residual must grow before a divergence warning is issued. + + + + Gets or sets the maximum relative increase that the residual may experience before a divergence warning is issued. + + Thrown if the Maximum is set to zero or below. + + + + Gets or sets the minimum number of iterations over which the residual must grow before + issuing a divergence warning. + + Thrown if the value is set to less than one. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Detect if solution is diverging + + true if diverging, otherwise false + + + + Gets required history Length + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Defines an that monitors residuals for NaN's. + + + + + The status of the calculation + + + + + The iteration number of the last iteration. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. 
+ + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + The base interface for classes that provide stop criteria for iterative calculations. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current IIterationStopCriterion. Status is set to Status field of current object. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + is not a legal value. Status should be set in implementation. + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + To implementers: Invoking this method should not clear the user defined + property values, only the state that is used to track the progress of the + calculation. + + + + Defines the interface for classes that solve the matrix equation Ax = b in + an iterative manner. + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Defines the interface for objects that can create an iterative solver with + specific settings. This interface is used to pass iterative solver creation + setup information around. + + + + + Gets the type of the solver that will be created by this setup object. + + + + + Gets type of preconditioner, if any, that will be created by this setup object. + + + + + Creates the iterative solver to be used. + + + + + Creates the preconditioner to be used by default (can be overwritten). + + + + + Gets the relative speed of the solver. + + Returns a value between 0 and 1, inclusive. + + + + Gets the relative reliability of the solver. + + Returns a value between 0 and 1 inclusive. + + + + The base interface for preconditioner classes. + + + + Preconditioners are used by iterative solvers to improve the convergence + speed of the solving process. Increase in convergence speed + is related to the number of iterations necessary to get a converged solution. + So while in general the use of a preconditioner means that the iterative + solver will perform fewer iterations it does not guarantee that the actual + solution time decreases given that some preconditioners can be expensive to + setup and run. + + + Note that in general changes to the matrix will invalidate the preconditioner + if the changes occur after creating the preconditioner. + + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix on which the preconditioner is based. + + + + Approximates the solution to the matrix equation Mx = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. 
+ + + + Defines an that monitors the numbers of iteration + steps as stop criterion. + + + + + The default value for the maximum number of iterations the process is allowed + to perform. + + + + + The maximum number of iterations the calculation is allowed to perform. + + + + + The status of the calculation + + + + + Initializes a new instance of the class with the default maximum + number of iterations. + + + + + Initializes a new instance of the class with the specified maximum + number of iterations. + + The maximum number of iterations the calculation is allowed to perform. + + + + Gets or sets the maximum number of iterations the calculation is allowed to perform. + + Thrown if the Maximum is set to a negative value. + + + + Returns the maximum number of iterations to the default. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Iterative Calculation Status + + + + + An iterator that is used to check if an iterative calculation should continue or stop. + + + + + The collection that holds all the stop criteria and the flag indicating if they should be added + to the child iterators. + + + + + The status of the iterator. + + + + + Initializes a new instance of the class with the default stop criteria. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Gets the current calculation status. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual iterators may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Indicates to the iterator that the iterative process has been cancelled. + + + Does not reset the stop-criteria. + + + + + Resets the to the pre-calculation state. + + + + + Creates a deep clone of the current iterator. + + The deep clone of the current iterator. + + + + Defines an that monitors residuals as stop criterion. + + + + + The maximum value for the residual below which the calculation is considered converged. 
+ + + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + The status of the calculation + + + + + The number of iterations since the residuals got below the maximum. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified + maximum residual and minimum number of iterations. + + + The maximum value for the residual below which the calculation is considered converged. + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + Gets or sets the maximum value for the residual below which the calculation is considered + converged. + + Thrown if the Maximum is set to a negative value. + + + + Gets or sets the minimum number of iterations for which the residual has to be + below the maximum before the calculation is considered converged. + + Thrown if the BelowMaximumFor is set to a value less than 1. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Loads the available objects from the specified assembly. + + The assembly which will be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The type in the assembly which should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The of the assembly that should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + + + + A unit preconditioner. This preconditioner does not actually do anything + it is only used when running an without + a preconditioner. + + + + + The coefficient matrix on which this preconditioner operates. + Is used to check dimensions on the different vectors that are processed. + + + + + Initializes the preconditioner and loads the internal data structures. + + + The matrix upon which the preconditioner is based. + + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + If and do not have the same size. + + + - or - + + + If the size of is different the number of rows of the coefficient matrix. 
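Putting the iterator, stop criteria, preconditioner and solver interfaces above together, a sketch with the stock `BiCgStab` solver and a diagonal preconditioner (both assumed to ship with the library) might look like this:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IterativeSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
        var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });

        // Stop after 1000 iterations at the latest, or earlier once the
        // residual drops below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver         = new BiCgStab();               // Krylov-type iterative solver
        var preconditioner = new DiagonalPreconditioner(); // cheap Jacobi-style preconditioner

        var x = a.SolveIterative(b, solver, iterator, preconditioner);
    }
}
```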
+ + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Evaluate the row and column at a specific data index. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + + The array containing the row indices of the existing rows. Element "i" of the array gives the index of the + element in the array that is first non-zero element in a row "i". 
+ The last value is equal to ValueCount, so that the number of non-zero entries in row "i" is always + given by RowPointers[i+i] - RowPointers[i]. This array thus has length RowCount+1. + + + + + An array containing the column indices of the non-zero values. Element "j" of the array + is the number of the column in matrix that contains the j-th value in the array. + + + + + Array that contains the non-zero elements of matrix. Values of the non-zero elements of matrix are mapped into the values + array using the row-major storage mapping described in a compressed sparse row (CSR) format. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Delete value from internal storage + + Index of value in nonZeroValues array + Row number of matrix + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Find item Index in nonZeroValues array + + Matrix row index + Matrix column index + Item index + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Array that contains the indices of the non-zero values. + + + + + Array that contains the non-zero elements of the vector. + + + + + Gets the number of non-zero elements in the vector. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the vector storage format is dense. + + + + + Gets or sets the value at the given index, with range checking. + + + The index of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + The index of the element. + The requested element. + Not range-checked. + + + + Sets the element without range checking. + + The index of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. 
+ + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + + Defines the generic class for Vector classes. + + Supported data types are double, single, , and . + + + + The zero value for type T. + + + + + The value of 1.0 for type T. + + + + + Negates vector and save result to + + Target vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar denominator to use. + The vector to store the result of the division. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar numerator to use. + The vector to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. 
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Adds a scalar to each element of the vector. + + The scalar to add. + A copy of the vector with the scalar added. + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + If this vector and are not the same size. + + + + Adds another vector to this vector. + + The vector to add to this one. + A new vector containing the sum of both vectors. + If this vector and are not the same size. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Subtracts a scalar from each element of the vector. + + The scalar to subtract. + A new vector containing the subtraction of this vector and the scalar. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Subtracts each element of the vector from a scalar. + + The scalar to subtract from. + A new vector containing the subtraction of the scalar and this vector. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Returns a negated vector. + + The negated vector. + Added as an alternative to the unary negation operator. + + + + Negates vector and save result to + + Target vector + + + + Subtracts another vector from this vector. + + The vector to subtract from this one. + A new vector containing the subtraction of the two vectors. + If this vector and are not the same size. 
+ + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Return vector with complex conjugate values of the source vector + + Conjugated vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector. + + The scalar to multiply. + A new vector that is the multiplication of the vector and the scalar. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + If this vector and are not the same size. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + If is not of the same size. + + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + If is not of the same size. + If is . + + + + + Divides each element of the vector by a scalar. + + The scalar to divide with. + A new vector that is the division of the vector and the scalar. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar to divide with. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Divides a scalar by each element of the vector. + + The scalar to divide. + A new vector that is the division of the vector and the scalar. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. 
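For the `Vector<T>` arithmetic described in this block, a brief sketch (builder and member names assumed as in the matrix examples above):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorSketch
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var w = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

        double dot   = v.DotProduct(w);     // 1*4 + 2*5 + 3*6 = 32
        var    outer = v.OuterProduct(w);   // 3x3 matrix, M[i,j] = v[i]*w[j]
        var    sum   = v + w;               // element-wise, sizes must match
        var    half  = v / 2.0;             // divide each element by a scalar
        var    prod  = v.PointwiseMultiply(w);

        Console.WriteLine(dot);
    }
}
```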
+ + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this vector with another vector. + + The vector to pointwise multiply with this one. + A new vector which is the pointwise multiplication of the two vectors. + If this vector and are not the same size. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector. + + The pointwise denominator vector to use. + A new vector which is the pointwise division of the two vectors. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise division. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The matrix to store the result into. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + The vector to store the result into. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise modulus. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise remainder. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Helper function to apply a unary function to a vector. The function + f modifies the vector given to it in place. Before its + called, a copy of the 'this' vector with the same dimension is + first created, then passed to f. 
The copy is returned as the result + + Function which takes a vector, modifies it in place and returns void + New instance of vector which is the result + + + + Helper function to apply a unary function which modifies a vector + in place. + + Function which takes a vector, modifies it in place and returns void + The vector where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes a scalar and + a vector and modifies the latter in place. A copy of the "this" + vector is therefore first made and then passed to f together with + the scalar argument. The copy is then returned as the result + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The resulting vector + + + + Helper function to apply a binary function which takes a scalar and + a vector, modifies the latter in place and returns void. + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The vector where the result will be placed + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the latter in place. A copy of the "this" vector is + first made and then passed to f together with the other vector. The + copy is then returned as the result + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the second one in place + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The vector to store the result. + If this vector and are not the same size. 
+ + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + The vector to store the result + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector. + + The other vector + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. 
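The block above lists pointwise unary functions (abs, trig, rounding, sign), the outer product, and the scalar minimum/maximum clamps. A brief sketch under the same Math.NET Numerics assumption; the instance names `PointwiseAbs`, `PointwiseRound`, `PointwiseMaximum`/`PointwiseMinimum`, and `OuterProduct` are assumed to match those entries:

```csharp
using MathNet.Numerics.LinearAlgebra;

class PointwiseFunctionsDemo
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { -1.4, 0.25, 2.0 });

        var abs     = v.PointwiseAbs();          // [1.4, 0.25, 2.0]
        var rounded = v.PointwiseRound();        // [-1, 0, 2]
        var clipped = v.PointwiseMaximum(0.0)    // clamp each element to [0, 1]
                       .PointwiseMinimum(1.0);

        // Outer product M[i,j] = u[i] * v[j] yields a 3x3 matrix here.
        var u = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        Matrix<double> outer = u.OuterProduct(v);
    }
}
```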
+ + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = (sum(abs(this[i])^p))^(1/p) + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + The p value. + This vector normalized to a unit vector with respect to the p-norm. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the value of maximum element. + + The value of maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the value of the minimum element. + + The value of the minimum element. + + + + Returns the index of the minimum element. + + The index of minimum element. 
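The norm and extremum entries above (L1, L2, infinity and p-norms, normalization, absolute min/max and their indices) map to a compact example. Assuming the Math.NET Numerics member names:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class NormDemo
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { 3.0, -4.0, 0.0 });

        Console.WriteLine(v.L1Norm());        // 7   (sum of absolute values, Manhattan norm)
        Console.WriteLine(v.L2Norm());        // 5   (Euclidean length)
        Console.WriteLine(v.InfinityNorm());  // 4   (largest absolute value)
        Console.WriteLine(v.Norm(3.0));       // general p-norm with p = 3

        var unit = v.Normalize(2.0);          // v scaled to unit Euclidean length

        Console.WriteLine(v.AbsoluteMaximum());       // 4
        Console.WriteLine(v.AbsoluteMaximumIndex());  // 1
        Console.WriteLine(v.Minimum());               // -4
        Console.WriteLine(v.MaximumIndex());          // 0 (value 3 is the maximum element)
    }
}
```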
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Computes the sum of the absolute value of the vector's elements. + + The sum of the absolute value of the vector's elements. + + + + Indicates whether the current object is equal to another object of the same type. + + An object to compare with this object. + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Returns an enumerator that iterates through the collection. + + + A that can be used to iterate through the collection. + + + + + Returns an enumerator that iterates through a collection. + + + An object that can be used to iterate through the collection. + + + + + Returns a string that describes the type, dimensions and shape of this vector. + + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Character to use to print if there is not enough space to print all entries. Typical value: "..". + Character to use to separate two columns on a line. Typical value: " " (2 spaces). + Character to use to separate two rows/lines. Typical value: Environment.NewLine. + Function to provide a string for any given entry value. + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that represents the content of this vector, column by column. + + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector, column by column and with a type header. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Vector class. + + + + + Gets the raw vector data storage. + + + + + Gets the length or number of dimensions of this vector. + + + + Gets or sets the value at the given . + The index of the value to get or set. + The value of the vector at the given . + If is negative or + greater than the size of the vector. + + + Gets the value at the given without range checking.. + The index of the value to get or set. + The value of the vector at the given . 
+ + + Sets the at the given without range checking.. + The index of the value to get or set. + The value to set. + + + + Resets all values to zero. + + + + + Sets all values of a subvector to zero. + + + + + Set all values whose absolute value is smaller than the threshold to zero, in-place. + + + + + Set all values that meet the predicate to zero, in-place. + + + + + Returns a deep-copy clone of the vector. + + A deep-copy clone of the vector. + + + + Set the values of this vector to the given values. + + The array containing the values to use. + If is . + If is not the same size as this vector. + + + + Copies the values of this vector into the target vector. + + The vector to copy elements into. + If is . + If is not the same size as this vector. + + + + Creates a vector containing specified elements. + + The first element to begin copying from. + The number of elements to copy. + A vector containing a copy of the specified elements. + If is not positive or + greater than or equal to the size of the vector. + If + is greater than or equal to the size of the vector. + + If is not positive. + + + + Copies the values of a given vector into a region in this vector. + + The field to start copying to + The number of fields to copy. Must be positive. + The sub-vector to copy from. + If is + + + + Copies the requested elements from this vector to another. + + The vector to copy the elements to. + The element to start copying from. + The element to start copying to. + The number of elements to copy. + + + + Returns the data contained in the vector as an array. + The returned array will be independent from this vector. + A new memory block will be allocated for the array. + + The vector's data as an array. + + + + Returns the internal array of this vector if, and only if, this vector is stored by such an array internally. + Otherwise returns null. Changes to the returned array and the vector will affect each other. + Use ToArray instead if you always need an independent array. + + + + + Create a matrix based on this vector in column form (one single column). + + + This vector as a column matrix. + + + + + Create a matrix based on this vector in row form (one single row). + + + This vector as a row matrix. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector. + + + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector. + + + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector and their index. + + + The enumerator returns a Tuple with the first value being the element index + and the second value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector and their index. + + + The enumerator returns a Tuple with the first value being the element index + and the second value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Applies a function to each value of this vector and replaces the value with its result. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). 
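The entries above cover element access, sub-vector copies, and conversions to arrays and single-row/column matrices. A hedged sketch, with `At`, `SubVector`, `SetSubVector`, `ToArray`, `ToColumnMatrix`, and `ToRowMatrix` assumed from the Math.NET Numerics API these descriptions resemble:

```csharp
using MathNet.Numerics.LinearAlgebra;

class CopySubVectorDemo
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { 10.0, 20.0, 30.0, 40.0 });

        double second = v[1];              // range-checked indexer: 20
        double fast   = v.At(2);           // access without range checking: 30

        var middle = v.SubVector(1, 2);    // independent copy of [20, 30]
        v.SetSubVector(0, 2, Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 }));

        double[] data = v.ToArray();               // independent array copy
        Matrix<double> col = v.ToColumnMatrix();   // 4x1 matrix
        Matrix<double> row = v.ToRowMatrix();      // 1x4 matrix
    }
}
```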
+ + + + + Applies a function to each value of this vector and replaces the value with its result. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and returns the results as a new vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and returns the results as a new vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value pair of two vectors and replaces the value in the result vector. + + + + + Applies a function to each value pair of two vectors and returns the results as a new vector. + + + + + Applies a function to update the status with each value pair of two vectors and returns the resulting status. + + + + + Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a tuple with the index and values of the first element pair of two vectors of the same size satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element pairs of two vectors of the same size satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all elements satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). 
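The map/find/predicate entries above are easiest to see in use. A minimal sketch, again assuming the Math.NET Numerics names (`Map`, `MapIndexed`, `MapInplace`, `Find`, `Exists`, `ForAll`); note that on sparse storage zero entries may be skipped unless mapping of zeros is forced, as the remarks above state:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class MapDemo
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

        var squared = v.Map(x => x * x);              // [1, 4, 9]
        var indexed = v.MapIndexed((i, x) => i + x);  // [1, 3, 5]

        v.MapInplace(x => 2.0 * x);                   // v is now [2, 4, 6]

        var hit = v.Find(x => x > 3.0);               // (index, value) of the first match, or null
        bool any = v.Exists(x => x > 5.0);            // true (6 > 5)
        bool all = v.ForAll(x => x > 0.0);            // true
    }
}
```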
+ + + + + Returns true if all element pairs of two vectors of the same size satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a Vector containing the same values of . + + This method is included for completeness. + The vector to get the values from. + A vector containing the same values as . + If is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Adds a scalar to each element of a vector. + + The vector to add to. + The scalar value to add. + The result of the addition. + If is . + + + + Adds a scalar to each element of a vector. + + The scalar value to add. + The vector to add to. + The result of the addition. + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of a vector. + + The vector to subtract from. + The scalar value to subtract. + The result of the subtraction. + If is . + + + + Subtracts each element of a vector from a scalar. + + The scalar value to subtract from. + The vector to subtract. + The result of the subtraction. + If is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a scalar with a vector. + + The scalar to divide. + The vector. + The result of the division. + If is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Pointwise divides two Vectors. + + The vector to divide. + The other vector. + The result of the division. + If and are not the same size. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the remainder of. + The divisor to use. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of the given dividend of each element of the vector. + + The dividend we want to compute the remainder of. + The vector whose elements we want to use as divisor. + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of two vectors. + + The vector whose elements we want to compute the remainder of. + The divisor to use. + If and are not the same size. + If is . 
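The operator entries above (vector/vector and vector/scalar addition, subtraction, multiplication, division, remainder, and the dot-product operator) correspond to the usual C# operator overloads. A short sketch under the same library assumption:

```csharp
using MathNet.Numerics.LinearAlgebra;

class OperatorDemo
{
    static void Main()
    {
        var a = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var b = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

        var sum   = a + b;        // [5, 7, 9]
        var diff  = a - b;        // [-3, -3, -3]
        var shift = a + 1.0;      // [2, 3, 4]
        var scale = 2.0 * a;      // [2, 4, 6]
        double dot = a * b;       // 32, the dot product between two vectors
        var ratio = a / 2.0;      // [0.5, 1, 1.5]
        var rem   = a % 2.0;      // [1, 0, 1], sign follows the dividend
    }
}
```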
+ + + + Computes the sqrt of a vector pointwise + + The input vector + + + + + Computes the exponential of a vector pointwise + + The input vector + + + + + Computes the log of a vector pointwise + + The input vector + + + + + Computes the log10 of a vector pointwise + + The input vector + + + + + Computes the sin of a vector pointwise + + The input vector + + + + + Computes the cos of a vector pointwise + + The input vector + + + + + Computes the tan of a vector pointwise + + The input vector + + + + + Computes the asin of a vector pointwise + + The input vector + + + + + Computes the acos of a vector pointwise + + The input vector + + + + + Computes the atan of a vector pointwise + + The input vector + + + + + Computes the sinh of a vector pointwise + + The input vector + + + + + Computes the cosh of a vector pointwise + + The input vector + + + + + Computes the tanh of a vector pointwise + + The input vector + + + + + Computes the absolute value of a vector pointwise + + The input vector + + + + + Computes the floor of a vector pointwise + + The input vector + + + + + Computes the ceiling of a vector pointwise + + The input vector + + + + + Computes the rounded value of a vector pointwise + + The input vector + + + + + Converts a vector to single precision. + + + + + Converts a vector to double precision. + + + + + Converts a vector to single precision complex numbers. + + + + + Converts a vector to double precision complex numbers. + + + + + Gets a single precision complex vector with the real parts from the given vector. + + + + + Gets a double precision complex vector with the real parts from the given vector. + + + + + Gets a real vector representing the real parts of a complex vector. + + + + + Gets a real vector representing the real parts of a complex vector. + + + + + Gets a real vector representing the imaginary parts of a complex vector. + + + + + Gets a real vector representing the imaginary parts of a complex vector. + + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + + Predictor matrix X + Response vector Y + The direct method to be used to compute the regression. + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + + Predictor matrix X + Response matrix Y + The direct method to be used to compute the regression. + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + The direct method to be used to compute the regression. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + The direct method to be used to compute the regression. + Best fitting list of model parameters β for each element in the predictor-arrays. 
+ + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. 
+ Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as (a, b) tuple, + where a is the intercept and b the slope. + + Predictor (independent) + Response (dependent) + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as (a, b) tuple, + where a is the intercept and b the slope. + + Predictor-Response samples as tuples + + + + Least-Squares fitting the points (x,y) to a line y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + Predictor (independent) + Response (dependent) + + + + Least-Squares fitting the points (x,y) to a line y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + Predictor-Response samples as tuples + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response vector Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response matrix Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response vector Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + True if an intercept should be added as first artificial predictor value. Default = false. + + + + Weighted Linear Regression using normal equations. + + List of sample vectors (predictor) together with their response. + List of weights, one for each sample. + True if an intercept should be added as first artificial predictor value. Default = false. 
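The least-squares entries above (normal equations, QR, SVD, simple line fits, weighted regression) can be illustrated with a small hedged example. `Fit.Line` and `MultipleRegression.QR` are assumed from the Math.NET Numerics API this documentation appears to belong to; the sample data below is purely illustrative:

```csharp
using MathNet.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearRegression;

class LeastSquaresDemo
{
    static void Main()
    {
        // Simple line fit y = a + b*x; Item1 is the intercept a, Item2 the slope b.
        double[] x = { 0.0, 1.0, 2.0, 3.0 };
        double[] y = { 1.1, 2.9, 5.2, 6.8 };
        var line = Fit.Line(x, y);
        double intercept = line.Item1, slope = line.Item2;

        // General least squares X*beta ≈ Y via an orthogonal (QR) decomposition,
        // which is more numerically stable than the normal equations but slower.
        var X = Matrix<double>.Build.DenseOfArray(new[,]
        {
            { 1.0, 0.0 },
            { 1.0, 1.0 },
            { 1.0, 2.0 },
            { 1.0, 3.0 },
        });
        var Y = Vector<double>.Build.DenseOfArray(y);
        Vector<double> beta = MultipleRegression.QR(X, Y);   // [intercept, slope]
    }
}
```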
+ + + + Locally-Weighted Linear Regression using normal equations. + + + + + Locally-Weighted Linear Regression using normal equations. + + + + + First Order AB method(same as Forward Euler) + + Initial value + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Second Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Third Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Fourth Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + ODE Solver Algorithms + + + + + Second Order Runge-Kutta method + + initial value + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Fourth Order Runge-Kutta method + + initial value + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Second Order Runge-Kutta to solve ODE SYSTEM + + initial vector + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Fourth Order Runge-Kutta to solve ODE SYSTEM + + initial vector + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm is an iterative method for solving box-constrained nonlinear optimization problems + http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz + + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The lower bound + The upper bound + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems + + + + + Creates BFGS minimizer + + The gradient tolerance + The parameter tolerance + The function progress tolerance + The maximum number of iterations + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + + Creates a base class for BFGS minimization + + + + + Broyden-Fletcher-Goldfarb-Shanno solver for finding function minima + See http://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm + Inspired by implementation: https://github.com/PatWie/CppNumericalSolvers/blob/master/src/BfgsSolver.cpp + + + + + Finds a minimum of a function by the BFGS quasi-Newton method + This uses the function and it's gradient (partial derivatives in each direction) and approximates the Hessian + + An initial guess + Evaluates the function at a point + Evaluates the gradient of the function at a point + The minimum found + + + + Objective function with a frozen evaluation that must not be changed from the outside. + + + + Create a new unevaluated and independent copy of this objective function + + + + Objective function with a mutable evaluation. + + + + Create a new independent copy of this objective function, evaluated at the same point. + + + + Get the y-values of the observations. 
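For the ODE-solver entries above (Adams-Bashforth and Runge-Kutta methods that take an initial value, a time span, an output size N, and the ODE right-hand side), a minimal hedged sketch of a fourth-order Runge-Kutta call might look as follows; the class and method names (`RungeKutta.FourthOrder` in `MathNet.Numerics.OdeSolvers`) are assumed:

```csharp
using System;
using MathNet.Numerics.OdeSolvers;

class OdeDemo
{
    static void Main()
    {
        // Solve dy/dt = -y with y(0) = 1 on [0, 5]; the exact solution is exp(-t).
        Func<double, double, double> f = (t, y) => -y;

        // N = 100 output points: the larger N, the finer the approximation.
        double[] approx = RungeKutta.FourthOrder(1.0, 0.0, 5.0, 100, f);

        Console.WriteLine(approx[0]);                    // 1.0, the initial value
        Console.WriteLine(approx[approx.Length - 1]);    // close to exp(-5) ≈ 0.0067
    }
}
```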
+ + + + + Get the values of the weights for the observations. + + + + + Get the y-values of the fitted model that correspond to the independent values. + + + + + Get the values of the parameters. + + + + + Get the residual sum of squares. + + + + + Get the Gradient vector. G = J'(y - f(x; p)) + + + + + Get the approximated Hessian matrix. H = J'J + + + + + Get the number of calls to function. + + + + + Get the number of calls to jacobian. + + + + + Get the degree of freedom. + + + + + The scale factor for initial mu + + + + + Non-linear least square fitting by the Levenberg-Marduardt algorithm. + + The objective function, including model, observations, and parameter bounds. + The initial guess values. + The initial damping parameter of mu. + The stopping threshold for infinity norm of the gradient vector. + The stopping threshold for L2 norm of the change of parameters. + The stopping threshold for L2 norm of the residuals. + The max iterations. + The result of the Levenberg-Marquardt minimization + + + + Limited Memory version of Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm + + + + + + Creates L-BFGS minimizer + + Numbers of gradients and steps to store. + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + Search for a step size alpha that satisfies the weak Wolfe conditions. The weak Wolfe + Conditions are + i) Armijo Rule: f(x_k + alpha_k p_k) <= f(x_k) + c1 alpha_k p_k^T g(x_k) + ii) Curvature Condition: p_k^T g(x_k + alpha_k p_k) >= c2 p_k^T g(x_k) + where g(x) is the gradient of f(x), 0 < c1 < c2 < 1. + + Implementation is based on http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + + references: + http://en.wikipedia.org/wiki/Wolfe_conditions + http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + + + + Implemented following http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + The objective function being optimized, evaluated at the starting point of the search + Search direction + Initial size of the step in the search direction + + + + The objective function being optimized, evaluated at the starting point of the search + Search direction + Initial size of the step in the search direction + The upper bound + + + + Creates a base class for minimization + + The gradient tolerance + The parameter tolerance + The function progress tolerance + The maximum number of iterations + + + + Class implementing the Nelder-Mead simplex algorithm, used to find a minima when no gradient is available. + Called fminsearch() in Matlab. 
A description of the algorithm can be found at + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + or + https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method + + + + + Finds the minimum of the objective function without an initial perturbation, the default values used + by fminsearch() in Matlab are used instead + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + + The objective function, no gradient or hessian needed + The initial guess + The minimum point + + + + Finds the minimum of the objective function with an initial perturbation + + The objective function, no gradient or hessian needed + The initial guess + The initial perturbation + The minimum point + + + + Finds the minimum of the objective function without an initial perturbation, the default values used + by fminsearch() in Matlab are used instead + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + + The objective function, no gradient or hessian needed + The initial guess + The minimum point + + + + Finds the minimum of the objective function with an initial perturbation + + The objective function, no gradient or hessian needed + The initial guess + The initial perturbation + The minimum point + + + + Evaluate the objective function at each vertex to create a corresponding + list of error values for each vertex + + + + + + + + Check whether the points in the error profile have so little range that we + consider ourselves to have converged + + + + + + + + + Examine all error values to determine the ErrorProfile + + + + + + + Construct an initial simplex, given starting guesses for the constants, and + initial step sizes for each dimension + + + + + + + Test a scaling operation of the high point, and replace it if it is an improvement + + + + + + + + + + + Contract the simplex uniformly around the lowest point + + + + + + + + + Compute the centroid of all points except the worst + + + + + + + + The value of the constant + + + + + Returns the best fit parameters. + + + + + Returns the standard errors of the corresponding parameters + + + + + Returns the y-values of the fitted model that correspond to the independent values. + + + + + Returns the covariance matrix at minimizing point. + + + + + Returns the correlation matrix at minimizing point. + + + + + The stopping threshold for the function value or L2 norm of the residuals. + + + + + The stopping threshold for L2 norm of the change of the parameters. + + + + + The stopping threshold for infinity norm of the gradient. + + + + + The maximum number of iterations. + + + + + The lower bound of the parameters. + + + + + The upper bound of the parameters. + + + + + The scale factors for the parameters. + + + + + Objective function where neither Gradient nor Hessian is available. + + + + + Objective function where the Gradient is available. Greedy evaluation. + + + + + Objective function where the Gradient is available. Lazy evaluation. + + + + + Objective function where the Hessian is available. Greedy evaluation. + + + + + Objective function where the Hessian is available. Lazy evaluation. + + + + + Objective function where both Gradient and Hessian are available. Greedy evaluation. + + + + + Objective function where both Gradient and Hessian are available. Lazy evaluation. + + + + + Objective function where neither first nor second derivative is available. + + + + + Objective function where the first derivative is available. 
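The Nelder-Mead entries above describe a derivative-free simplex minimizer in the style of Matlab's fminsearch(). A hedged sketch of how such a call might look, assuming the Math.NET Numerics optimization types (`NelderMeadSimplex`, `ObjectiveFunction.Value`, and a result exposing `MinimizingPoint` are all assumptions):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.Optimization;

class NelderMeadDemo
{
    static void Main()
    {
        // Minimize the Rosenbrock function; no gradient or Hessian is required.
        Func<Vector<double>, double> rosenbrock =
            p => Math.Pow(1.0 - p[0], 2) + 100.0 * Math.Pow(p[1] - p[0] * p[0], 2);

        var objective = ObjectiveFunction.Value(rosenbrock);

        // Arguments: convergence tolerance, maximum number of iterations.
        var solver = new NelderMeadSimplex(1e-8, 10000);

        var initialGuess = Vector<double>.Build.DenseOfArray(new[] { -1.2, 1.0 });
        var result = solver.FindMinimum(objective, initialGuess);

        Console.WriteLine(result.MinimizingPoint);   // expected near (1, 1)
    }
}
```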
+ + + + + Objective function where the first and second derivatives are available. + + + + + objective model with a user supplied jacobian for non-linear least squares regression. + + + + + Objective model for non-linear least squares regression. + + + + + Objective model with a user supplied jacobian for non-linear least squares regression. + + + + + Objective model for non-linear least squares regression. + + + + + Objective function with a user supplied jacobian for nonlinear least squares regression. + + + + + Objective function for nonlinear least squares regression. + The numerical jacobian with accuracy order is used. + + + + + Adapts an objective function with only value implemented + to provide a gradient as well. Gradient calculation is + done using the finite difference method, specifically + forward differences. + + For each gradient computed, the algorithm requires an + additional number of function evaluations equal to the + functions's number of input parameters. + + + + + Set or get the values of the independent variable. + + + + + Set or get the values of the observations. + + + + + Set or get the values of the weights for the observations. + + + + + Get whether parameters are fixed or free. + + + + + Get the number of observations. + + + + + Get the number of unknown parameters. + + + + + Get the degree of freedom + + + + + Get the number of calls to function. + + + + + Get the number of calls to jacobian. + + + + + Set or get the values of the parameters. + + + + + Get the y-values of the fitted model that correspond to the independent values. + + + + + Get the residual sum of squares. + + + + + Get the Gradient vector of x and p. + + + + + Get the Hessian matrix of x and p, J'WJ + + + + + Set observed data to fit. + + + + + Set parameters and bounds. + + The initial values of parameters. + The list to the parameters fix or free. + + + + Non-linear least square fitting by the trust region dogleg algorithm. + + + + + The trust region subproblem. + + + + + The stopping threshold for the trust region radius. + + + + + Non-linear least square fitting by the trust-region algorithm. + + The objective model, including function, jacobian, observations, and parameter bounds. + The subproblem + The initial guess values. + The stopping threshold for L2 norm of the residuals. + The stopping threshold for infinity norm of the gradient vector. + The stopping threshold for L2 norm of the change of parameters. + The stopping threshold for trust region radius + The max iterations. + + + + + Non-linear least square fitting by the trust region Newton-Conjugate-Gradient algorithm. + + + + + Class to represent a permutation for a subset of the natural numbers. + + + + + Entry _indices[i] represents the location to which i is permuted to. + + + + + Initializes a new instance of the Permutation class. + + An array which represents where each integer is permuted too: indices[i] represents that integer i + is permuted to location indices[i]. + + + + Gets the number of elements this permutation is over. + + + + + Computes where permutes too. + + The index to permute from. + The index which is permuted to. + + + + Computes the inverse of the permutation. + + The inverse of the permutation. + + + + Construct an array from a sequence of inversions. + + + From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be + encoded using the array [22244]. + + The set of inversions to construct the permutation from. + A permutation generated from a sequence of inversions. 
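The Permutation entries above define a permutation by an index array where indices[i] is the location element i is permuted to, plus its dimension and inverse. A short hedged example (the `Permutation` constructor, indexer, `Dimension`, and `Inverse` are assumed from the library these docs resemble):

```csharp
using System;
using MathNet.Numerics;

class PermutationDemo
{
    static void Main()
    {
        // indices[i] is the location element i is permuted to: 0 -> 2, 1 -> 0, 2 -> 1.
        var p = new Permutation(new[] { 2, 0, 1 });

        Console.WriteLine(p.Dimension);   // 3
        Console.WriteLine(p[0]);          // 2

        var inverse = p.Inverse();        // maps 2 -> 0, 0 -> 1, 1 -> 2
        Console.WriteLine(inverse[2]);    // 0
    }
}
```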
+ + + + Construct a sequence of inversions from the permutation. + + + From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be + encoded using the array [22244]. + + A sequence of inversions. + + + + Checks whether the array represents a proper permutation. + + An array which represents where each integer is permuted too: indices[i] represents that integer i + is permuted to location indices[i]. + True if represents a proper permutation, false otherwise. + + + + A single-variable polynomial with real-valued coefficients and non-negative exponents. + + + + + The coefficients of the polynomial in a + + + + + Only needed for the ToString method + + + + + Degree of the polynomial, i.e. the largest monomial exponent. For example, the degree of y=x^2+x^5 is 5, for y=3 it is 0. + The null-polynomial returns degree -1 because the correct degree, negative infinity, cannot be represented by integers. + + + + + Create a zero-polynomial with a coefficient array of the given length. + An array of length N can support polynomials of a degree of at most N-1. + + Length of the coefficient array + + + + Create a zero-polynomial + + + + + Create a constant polynomial. + Example: 3.0 -> "p : x -> 3.0" + + The coefficient of the "x^0" monomial. + + + + Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). + Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". + + Polynomial coefficients as array + + + + Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). + Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". + + Polynomial coefficients as enumerable + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k + + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + + The location where to evaluate the polynomial at. + + + + Evaluate a polynomial at point x. + + The location where to evaluate the polynomial at. + + + + Evaluate a polynomial at points z. + + The locations where to evaluate the polynomial at. + + + + Evaluate a polynomial at points z. + + The locations where to evaluate the polynomial at. + + + + Calculates the complex roots of the Polynomial by eigenvalue decomposition + + a vector of complex numbers with the roots + + + + Get the eigenvalue matrix A of this polynomial such that eig(A) = roots of this polynomial. + + Eigenvalue matrix A + This matrix is similar to the companion matrix of this polynomial, in such a way, that it's transpose is the columnflip of the companion matrix + + + + Addition of two Polynomials (point-wise). 
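The Polynomial entries above (coefficients stored in ascending order, degree, least-squares fitting, evaluation, and roots via an eigenvalue decomposition) suggest the following hedged sketch; the constructor form, `Polynomial.Fit`, `Evaluate`, `Degree`, and `Roots` are assumed names, and the sample data is illustrative only:

```csharp
using System;
using MathNet.Numerics;

class PolynomialDemo
{
    static void Main()
    {
        // Coefficients in ascending order: p(x) = 5 + 0*x + 2*x^2.
        var p = new Polynomial(new[] { 5.0, 0.0, 2.0 });

        Console.WriteLine(p.Degree);         // 2
        Console.WriteLine(p.Evaluate(3.0));  // 5 + 2*9 = 23

        // Least-squares fit of a 2nd-order polynomial to sample points (x, y).
        double[] x = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] y = { 1.0, 3.1, 9.2, 18.8, 33.1 };
        var fitted = Polynomial.Fit(x, y, 2);

        var roots = p.Roots();               // complex roots via eigenvalue decomposition
    }
}
```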
+ + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Addition of a polynomial and a scalar. + + + + + Subtraction of two Polynomials (point-wise). + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Addition of a scalar from a polynomial. + + + + + Addition of a polynomial from a scalar. + + + + + Negation of a polynomial. + + + + + Multiplies a polynomial by a polynomial (convolution) + + Left polynomial + Right polynomial + Resulting Polynomial + + + + Scales a polynomial by a scalar + + Polynomial + Scalar value + Resulting Polynomial + + + + Scales a polynomial by division by a scalar + + Polynomial + Scalar value + Resulting Polynomial + + + + Euclidean long division of two polynomials, returning the quotient q and remainder r of the two polynomials a and b such that a = q*b + r + + Left polynomial + Right polynomial + A tuple holding quotient in first and remainder in second + + + + Point-wise division of two Polynomials + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Point-wise multiplication of two Polynomials + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Division of two polynomials returning the quotient-with-remainder of the two polynomials given + + Right polynomial + A tuple holding quotient in first and remainder in second + + + + Addition of two Polynomials (piecewise) + + Left polynomial + Right polynomial + Resulting Polynomial + + + + adds a scalar to a polynomial. + + Polynomial + Scalar value + Resulting Polynomial + + + + adds a scalar to a polynomial. + + Scalar value + Polynomial + Resulting Polynomial + + + + Subtraction of two polynomial. + + Left polynomial + Right polynomial + Resulting Polynomial + + + + Subtracts a scalar from a polynomial. + + Polynomial + Scalar value + Resulting Polynomial + + + + Subtracts a polynomial from a scalar. + + Scalar value + Polynomial + Resulting Polynomial + + + + Negates a polynomial. + + Polynomial + Resulting Polynomial + + + + Multiplies a polynomial by a polynomial (convolution). + + Left polynomial + Right polynomial + resulting Polynomial + + + + Multiplies a polynomial by a scalar. + + Polynomial + Scalar value + Resulting Polynomial + + + + Multiplies a polynomial by a scalar. + + Scalar value + Polynomial + Resulting Polynomial + + + + Divides a polynomial by scalar value. + + Polynomial + Scalar value + Resulting Polynomial + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Utilities for working with floating point numbers. 
+ + + + Useful links: + + + http://docs.sun.com/source/806-3568/ncg_goldberg.html#689 - What every computer scientist should know about floating-point arithmetic + + + http://en.wikipedia.org/wiki/Machine_epsilon - Gives the definition of machine epsilon + + + + + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The number of decimal places on which the values must be compared. Must be 1 or larger. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The relative accuracy required for being almost equal. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The number of decimal places on which the values must be compared. Must be 1 or larger. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The maximum error in terms of Units in Last Place (ulps), i.e. the maximum number of decimals that may be different. Must be 1 or larger. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. 
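A small sketch of the tolerant three-way comparison documented above; the CompareTo extension overloads are assumed to be named and shaped as in this documentation.

```csharp
using System;
using MathNet.Numerics;   // Precision extension methods

class CompareDemo
{
    static void Main()
    {
        double a = 1.0;
        double b = 1.0 + 1e-12;

        // Loose absolute tolerance: the values count as almost equal, result 0.
        Console.WriteLine(a.CompareTo(b, 1e-10));   // 0

        // Tight tolerance: a is smaller than b beyond the tolerance, result -1.
        Console.WriteLine(a.CompareTo(b, 1e-14));   // -1
    }
}
```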
+ + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of thg. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). 
We divide by + two so that we have half the range on each side of thg. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is smaller than the second value; otherwise false. + + + + Checks if a given double values is finite, i.e. neither NaN nor inifnity + + The value to be checked fo finitenes. + + + + The number of binary digits used to represent the binary number for a double precision floating + point value. i.e. there are this many digits used to represent the + actual number, where in a number as: 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5. 
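The half-window rule quoted repeatedly above can be illustrated with plain arithmetic (no library call), which may make the 0.01 / 0.015 / 0.02 example easier to follow.

```csharp
using System;

class DecimalPlacesRule
{
    static void Main()
    {
        // With decimalPlaces = 2 the comparison window is 0.5 * 10^-2, so 0.01 and 0.014
        // compare as equal while 0.01 and 0.02 do not. This mirrors the documented rule.
        int decimalPlaces = 2;
        double halfWindow = 0.5 * Math.Pow(10, -decimalPlaces);

        Console.WriteLine(Math.Abs(0.010 - 0.014) < halfWindow); // True  -> treated as equal
        Console.WriteLine(Math.Abs(0.010 - 0.020) < halfWindow); // False -> 0.02 counts as larger
    }
}
```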
+ + + + + The number of binary digits used to represent the binary number for a single precision floating + point value. i.e. there are this many digits used to represent the + actual number, where in a number as: 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 double-precision floating numbers (64 bit). + According to the definition of Prof. Demmel and used in LAPACK and Scilab. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 double-precision floating numbers (64 bit). + According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). + According to the definition of Prof. Demmel and used in LAPACK and Scilab. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). + According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. + + + + + Actual double precision machine epsilon, the smallest number that can be subtracted from 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Demmel. + On a standard machine this is equivalent to `DoublePrecision`. + + + + + Actual double precision machine epsilon, the smallest number that can be added to 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Higham. + On a standard machine this is equivalent to `PositiveDoublePrecision`. + + + + + The number of significant decimal places of double-precision floating numbers (64 bit). + + + + + The number of significant decimal places of single-precision floating numbers (32 bit). + + + + + Value representing 10 * 2^(-53) = 1.11022302462516E-15 + + + + + Value representing 10 * 2^(-24) = 5.96046447753906E-07 + + + + + Returns the magnitude of the number. + + The value. + The magnitude of the number. + + + + Returns the magnitude of the number. + + The value. + The magnitude of the number. + + + + Returns the number divided by it's magnitude, effectively returning a number between -10 and 10. + + The value. + The value of the number. + + + + Returns a 'directional' long value. This is a long value which acts the same as a double, + e.g. a negative double value will return a negative double value starting at 0 and going + more negative as the double value gets more negative. + + The input double value. + A long value which is roughly the equivalent of the double value. + + + + Returns a 'directional' int value. This is a int value which acts the same as a float, + e.g. a negative float value will return a negative int value starting at 0 and going + more negative as the float value gets more negative. + + The input float value. + An int value which is roughly the equivalent of the double value. + + + + Increments a floating point number to the next bigger number representable by the data type. + + The value which needs to be incremented. + How many times the number should be incremented. + + The incrementation step length depends on the provided value. + Increment(double.MaxValue) will return positive infinity. + + The next larger floating point value. + + + + Decrements a floating point number to the next smaller number representable by the data type. + + The value which should be decremented. + How many times the number should be decremented. 
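A sketch of the epsilon constants and the Increment/Decrement helpers documented above; the constant and method names follow this documentation and should be checked against the installed MathNet.Numerics version.

```csharp
using System;
using MathNet.Numerics;

class EpsilonDemo
{
    static void Main()
    {
        Console.WriteLine(Precision.DoublePrecision);  // 2^-53, about 1.11e-16 (Demmel / LAPACK definition)
        Console.WriteLine(Precision.MachineEpsilon);   // measured unit roundoff on this machine

        double x = 1.0;
        double next = x.Increment();   // next representable double above 1.0
        double prev = x.Decrement();   // next representable double below 1.0
        Console.WriteLine(next - x);   // one ulp upward at 1.0, about 2.22e-16
        Console.WriteLine(x - prev);   // one ulp downward at 1.0, about 1.11e-16
    }
}
```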
+ + The decrementation step length depends on the provided value. + Decrement(double.MinValue) will return negative infinity. + + The next smaller floating point value. + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The maximum count of numbers between the zero and the number . + + Zero if || is fewer than numbers from zero, otherwise. + + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The maximum count of numbers between the zero and the number . + + Zero if || is fewer than numbers from zero, otherwise. + + + Thrown if is smaller than zero. + + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The absolute threshold for to consider it as zero. + Zero if || is smaller than , otherwise. + + Thrown if is smaller than zero. + + + + + Forces small numbers near zero to zero. + + The real number to coerce to zero, if it is almost zero. + Zero if || is smaller than 2^(-53) = 1.11e-16, otherwise. + + + + Determines the range of floating point numbers that will match the specified value with the given tolerance. + + The value. + The ulps difference. + + Thrown if is smaller than zero. + + Tuple of the bottom and top range ends. + + + + Returns the floating point number that will match the value with the tolerance on the maximum size (i.e. the result is + always bigger than the value) + + The value. + The ulps difference. + The maximum floating point number which is larger than the given . + + + + Returns the floating point number that will match the value with the tolerance on the minimum size (i.e. the result is + always smaller than the value) + + The value. + The ulps difference. + The minimum floating point number which is smaller than the given . + + + + Determines the range of ulps that will match the specified value with the given tolerance. + + The value. + The relative difference. + + Thrown if is smaller than zero. + + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + Tuple with the number of ULPS between the value and the value - relativeDifference as first, + and the number of ULPS between the value and the value + relativeDifference as second value. + + + + + Evaluates the count of numbers between two double numbers + + The first parameter. + The second parameter. + The second number is included in the number, thus two equal numbers evaluate to zero and two neighbor numbers evaluate to one. Therefore, what is returned is actually the count of numbers between plus 1. + The number of floating point values between and . + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + + Relative Epsilon (positive double or NaN). + + Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. 
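A sketch of CoerceZero as documented above, assuming the extension-method form in MathNet.Numerics.Precision.

```csharp
using System;
using MathNet.Numerics;

class CoerceZeroDemo
{
    static void Main()
    {
        double noisy = 1e-18;                         // typical round-off residue

        Console.WriteLine(noisy.CoerceZero());        // 0, default threshold ~1.11e-16
        Console.WriteLine(noisy.CoerceZero(1e-20));   // unchanged, threshold is tighter than the value
        Console.WriteLine(0.1.CoerceZero());          // 0.1, well above the threshold
    }
}
```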
+ + Relative Epsilon (positive float or NaN). + + Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive double or NaN) + Evaluates the positive epsilon. See also + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive float or NaN) + Evaluates the positive epsilon. See also + + + + + Calculates the actual (negative) double precision machine epsilon - the smallest number that can be subtracted from 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Demmel. + + Positive Machine epsilon + + + + Calculates the actual positive double precision machine epsilon - the smallest number that can be added to 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Higham. + + Machine epsilon + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. 
+ The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. 
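As a rough illustration of the AlmostEqual overloads documented above, the absolute-error form is shown below for doubles and Complex values; the Complex overload is assumed to take System.Numerics.Complex, as in recent MathNet.Numerics versions.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics;

class AlmostEqualDemo
{
    static void Main()
    {
        double a = 0.3;
        double b = 0.1 + 0.2;                          // differs from 0.3 by ~5.5e-17

        Console.WriteLine(a == b);                     // False, exact comparison fails
        Console.WriteLine(a.AlmostEqual(b, 1e-12));    // True, within the absolute tolerance

        var z1 = new Complex(1.0, 2.0);
        var z2 = new Complex(1.0, 2.0 + 1e-15);
        Console.WriteLine(z1.AlmostEqual(z2, 1e-12));  // True
    }
}
```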
+ + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. + Thrown if is smaller than zero. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. 
+ + + + Determines the 'number' of floating point numbers between two values (i.e. the number of discrete steps + between the two numbers) and then checks if that is within the specified tolerance. So if a tolerance + of 1 is passed then the result will be true only if the two numbers have the same binary representation + OR if they are two adjacent numbers that only differ by one step. + + + The comparison method used is explained in http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm . The article + at http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to transform the C code to + .NET enabled code without using pointers and unsafe code. + + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two floats and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. 
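The binary-representation comparison described above (tolerance expressed as a maximum number of representable doubles between the two values) can be sketched as below. The method name AlmostEqualNumbersBetween is taken from this documentation and may differ between versions, so treat it as an assumption.

```csharp
using System;
using MathNet.Numerics;

class UlpCompareDemo
{
    static void Main()
    {
        double a = 1.0;
        double b = a.Increment();     // exactly one representable step above a

        Console.WriteLine(a.AlmostEqualNumbersBetween(b, 1));   // True: adjacent floating point values
        Console.WriteLine(a.AlmostEqualNumbersBetween(2.0, 1)); // False: many steps apart
    }
}
```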
+ + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two vectors and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Support Interface for Precision Operations (like AlmostEquals). + + Type of the implementing class. + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + A norm of this value. + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. 
+ + The value to compare with. + A norm of the difference between this and the other value. + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. + + + + Revision + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + Frees the memory allocated to the MKL memory pool. + + + + + Frees the memory allocated to the MKL memory pool on the current thread. + + + + + Disable the MKL memory pool. May impact performance. + + + + + Retrieves information about the MKL memory pool. + + On output, returns the number of memory buffers allocated. + Returns the number of bytes allocated to all memory buffers. + + + + Enable gathering of peak memory statistics of the MKL memory pool. + + + + + Disable gathering of peak memory statistics of the MKL memory pool. + + + + + Measures peak memory usage of the MKL memory pool. + + Whether the usage counter should be reset. + The peak number of bytes allocated to all memory buffers. + + + + Disable gathering memory usage + + + + + Enable gathering memory usage + + + + + Return peak memory usage + + + + + Return peak memory usage and reset counter + + + + + Consistency vs. performance trade-off between runs on different machines. + + + + Consistent on the same CPU only (maximum performance) + + + Consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility) + + + Consistent on Intel CPUs supporting SSE2 or later + + + Consistent on Intel CPUs supporting SSE4.2 or later + + + Consistent on Intel CPUs supporting AVX or later + + + Consistent on Intel CPUs supporting AVX2 or later + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. + + + + + Helper class to load native libraries depending on the architecture of the OS and process. + + + + + Dictionary of handles to previously loaded libraries, + + + + + Gets a string indicating the architecture and bitness of the current process. + + + + + If the last native library failed to load then gets the corresponding exception + which occurred or null if the library was successfully loaded. + + + + + Load the native library with the given filename. + + The file name of the library to load. + Hint path where to look for the native binaries. Can be null. + True if the library was successfully loaded or if it has already been loaded. + + + + Try to load a native library by providing its name and a directory. + Tries to load an implementation suitable for the current CPU architecture + and process mode if there is a matching subfolder. + + True if the library was successfully loaded or if it has already been loaded. + + + + Try to load a native library by providing the full path including the file name of the library. + + True if the library was successfully loaded or if it has already been loaded. + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. 
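The provider plumbing documented above (managed fallback, native MKL provider, hint paths for the native binaries) is normally driven through the Control class. A sketch follows; the path string is purely hypothetical and Control.NativeProviderPath is assumed from the "hint path" entries above.

```csharp
using System;
using MathNet.Numerics;

class ProviderDemo
{
    static void Main()
    {
        // Optional hint where the native binaries live (hypothetical path).
        Control.NativeProviderPath = "/opt/mathnet/native";

        if (Control.TryUseNativeMKL())
            Console.WriteLine("Using the native MKL provider");
        else
            Control.UseManaged();                        // safe managed fallback

        Console.WriteLine(Control.LinearAlgebraProvider); // report which provider is active
    }
}
```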
+ + + + + Gets or sets the Fourier transform provider. Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsFFTProviderPath` or the default probing paths. + + + + + Try to use a native provider, if available. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsFFTProvider" environment variable, + or fall back to the best provider. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 + will cause k*k in the Bluestein sequence to overflow (GH-286). + + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Half rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Fully rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Radix-2 Reorder Helper Method + + Sample type + Sample vector + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). 
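The Fourier provider documented above is used indirectly through the Fourier transform API. The short sketch below exercises the Bluestein path described above by choosing a length that is not a power of two; power-of-two lengths take the radix-2 path instead.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FftDemo
{
    static void Main()
    {
        int n = 600;                       // not a power of two -> Bluestein algorithm
        var samples = new Complex[n];
        for (int i = 0; i < n; i++)
            samples[i] = new Complex(Math.Sin(2 * Math.PI * 50 * i / n), 0.0);

        Fourier.Forward(samples, FourierOptions.Default);   // in-place forward transform

        // A real sine at 50 cycles per record peaks at bins 50 and n-50.
        Console.WriteLine(samples[50].Magnitude);
    }
}
```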
+ + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . 
+ On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. 
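The LU, Cholesky and SVD routines documented above are the provider-level counterparts of GETRF/GETRS, POTRF/POTRS and GESVD. In application code they are normally reached through the high-level Matrix&lt;T&gt; API, which forwards to whichever provider is active; a minimal sketch, assuming the MathNet.Numerics.LinearAlgebra types:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SolveDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1 },
            { 1, 3 }
        });
        var b = Vector<double>.Build.Dense(new double[] { 1, 2 });

        var xLu   = a.LU().Solve(b);        // LUP factorization + substitution
        var xChol = a.Cholesky().Solve(b);  // valid because a is symmetric positive definite

        Console.WriteLine(xLu);
        Console.WriteLine(xChol);
    }
}
```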
The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If calling this method fails, consider to fall back to alternatives like the managed provider. + + + + + Frees memory buffers, caches and handles allocated in or to the provider. 
+ Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. 
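The DOT, AXPY, SCAL and GEMM style operations documented above map onto ordinary vector and matrix expressions in the high-level API, which MathNet forwards to the active provider. A small sketch of the correspondence:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class BlasStyleDemo
{
    static void Main()
    {
        var x = Vector<double>.Build.Dense(new double[] { 1, 2, 3 });
        var y = Vector<double>.Build.Dense(new double[] { 4, 5, 6 });

        double dot = x.DotProduct(y);      // DOT:  1*4 + 2*5 + 3*6 = 32
        var axpy   = y + 2.0 * x;          // AXPY: y + alpha*x
        var scaled = 0.5 * x;              // SCAL: alpha*x

        var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });
        var m = Matrix<double>.Build.DenseOfArray(new double[,] { { 0, 1 }, { 1, 0 } });
        var c = a * m;                     // GEMM with alpha = 1, beta = 0, no transposes

        Console.WriteLine($"{dot}\n{axpy}\n{scaled}\n{c}");
    }
}
```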
+ + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . 
+ On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + How to transpose a matrix. + + + + + Don't transpose a matrix. + + + + + Transpose a matrix. + + + + + Conjugate transpose a complex matrix. + + If a conjugate transpose is used with a real matrix, then the matrix is just transposed. + + + + Types of matrix norms. + + + + + The 1-norm. + + + + + The Frobenius norm. + + + + + The infinity norm. + + + + + The largest absolute value norm. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + Supported data types are Double, Single, Complex, and Complex32. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. 
+ + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiply elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. 
+ The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the full QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by QR factor. This is only used for the managed provider and can be + null for the native provider. The native provider uses the Q portion stored in the R matrix. + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + On entry the B matrix; on exit the X matrix. 
+ The number of columns of B. + On exit, the solution matrix. + Rows must be greater or equal to columns. + The type of QR factorization to perform. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Gets or sets the linear algebra provider. + Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsLAProviderPath` or the default probing paths. + + + + + Try to use a native provider, if available. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsLAProvider" environment variable, + or fall back to the best provider. + + + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. 
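As the remarks above note, the element-wise array operations have no direct BLAS counterpart but parallelize trivially. A minimal sketch of the AXPY-style update and a point-wise add follows, assuming plain dense arrays of equal length; the method names here are illustrative, not the interface's.

```csharp
using System;
using System.Threading.Tasks;

static class ArrayOpsSketch
{
    // AXPY-style update: result = y + alpha * x.
    public static void AddScaled(double[] y, double alpha, double[] x, double[] result)
    {
        for (int i = 0; i < y.Length; i++)
            result[i] = y[i] + alpha * x[i];
    }

    // Point-wise add z = x + y; no BLAS equivalent, but easy to parallelize.
    public static void PointWiseAdd(double[] x, double[] y, double[] z)
    {
        Parallel.For(0, z.Length, i => z[i] = x[i] + y[i]);
    }

    static void Main()
    {
        var x = new[] { 1.0, 2.0, 3.0 };
        var y = new[] { 10.0, 20.0, 30.0 };
        var r = new double[3];
        AddScaled(y, 2.0, x, r);                          // r = y + 2*x = {12, 24, 36}
        PointWiseAdd(x, y, r);                            // r = x + y   = {11, 22, 33}
        Console.WriteLine(string.Join(", ", r));
    }
}
```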
+ + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). 
The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + The B matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. 
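For the thin QR factorization just described (M >= N, with Q overwriting A and R produced separately), the sketch below uses modified Gram-Schmidt rather than the Householder reflections behind GEQRF/ORGQR; it assumes column-major storage and full column rank, and is for illustration only.

```csharp
using System;

static class QrSketch
{
    // Thin QR of a column-major m x n matrix (m >= n): 'a' is overwritten with
    // the orthonormal Q (m x n); the returned array is R (n x n, column-major),
    // so that Q * R reproduces the original A. Rank-deficient columns are not handled.
    public static double[] ThinQR(double[] a, int m, int n)
    {
        var r = new double[n * n];
        for (int j = 0; j < n; j++)
        {
            for (int k = 0; k < j; k++)
            {
                double dot = 0.0;                                  // R[k,j] = q_k . a_j
                for (int i = 0; i < m; i++) dot += a[k * m + i] * a[j * m + i];
                r[j * n + k] = dot;
                for (int i = 0; i < m; i++) a[j * m + i] -= dot * a[k * m + i];
            }
            double norm = 0.0;                                     // R[j,j] = ||a_j||
            for (int i = 0; i < m; i++) norm += a[j * m + i] * a[j * m + i];
            norm = Math.Sqrt(norm);
            r[j * n + j] = norm;
            for (int i = 0; i < m; i++) a[j * m + i] /= norm;      // normalize -> q_j
        }
        return r;
    }
}
```

Solving A*X=B from such a factorization amounts to forming Q^T*B and back-substituting against R, which is what the QR solve routines above do, with the Householder variant additionally carrying the extra reflector information mentioned in their descriptions.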
+ + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. 
+ This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + The requested of the matrix. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. 
+ The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. 
+ The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. 
If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. 
+ The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. 
+ + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. 
+ The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. 
+ + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. 
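Alongside the Cholesky routine just described, the LUP factorization documented earlier in this interface (P*A = L*U, the GETRF contract) can be sketched as Gaussian elimination with partial pivoting. Again this is an illustration, not the provider code: it assumes column-major storage, records 0-based pivot rows (LAPACK itself reports 1-based indices), and does not detect singular matrices.

```csharp
using System;

static class LuSketch
{
    // GETRF-style LUP factorization of an order x order column-major matrix:
    // on exit 'data' holds L below the diagonal (unit diagonal implied) and U
    // on and above it; ipiv[j] is the row swapped into position j.
    public static void Factor(double[] data, int order, int[] ipiv)
    {
        for (int j = 0; j < order; j++)
        {
            int p = j;                                    // partial pivoting: largest
            for (int i = j + 1; i < order; i++)           // magnitude entry in column j
                if (Math.Abs(data[j * order + i]) > Math.Abs(data[j * order + p])) p = i;
            ipiv[j] = p;

            if (p != j)                                   // swap rows j and p
                for (int k = 0; k < order; k++)
                {
                    double t = data[k * order + j];
                    data[k * order + j] = data[k * order + p];
                    data[k * order + p] = t;
                }

            double pivot = data[j * order + j];           // no singularity check here
            for (int i = j + 1; i < order; i++)
            {
                double m = data[j * order + i] / pivot;   // multiplier becomes L[i,j]
                data[j * order + i] = m;
                for (int k = j + 1; k < order; k++)       // update trailing submatrix
                    data[k * order + i] -= m * data[k * order + j];
            }
        }
    }
}
```

The corresponding GETRS-style solve then applies the recorded row swaps to B, forward-substitutes with L, and back-substitutes with U.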
+ + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. 
+ If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. 
+
+ Computes the requested norm of the matrix.
+   Parameters: the type of norm to compute; the number of rows; the number of columns; the
+   matrix to compute the norm from. Returns the requested norm of the matrix.
+
+ Multiplies two matrices: result = x * y.
+   Parameters: the x matrix; the number of rows in the x matrix; the number of columns in the
+   x matrix; the y matrix; the number of rows in the y matrix; the number of columns in the
+   y matrix; where to store the result of the multiplication. This is a simplified version of
+   the BLAS GEMM routine with alpha set to 1.0, beta set to 0.0, and x and y not transposed.
+
+ Multiplies two matrices and updates another with the result: c = alpha*op(a)*op(b) + beta*c.
+   Parameters: how to transpose the a matrix; how to transpose the b matrix; the value alpha
+   used to scale the product; the a matrix; the number of rows in the a matrix; the number of
+   columns in the a matrix; the b matrix; the number of rows in the b matrix; the number of
+   columns in the b matrix; the value beta used to scale the c matrix; the c matrix.
+
+ Computes the LUP factorization of A: P*A = L*U.
+   Parameters: an order-by-order matrix, overwritten with the LU factorization on exit (the
+   lower triangular factor L is stored under the diagonal, the diagonal being always 1.0 for
+   the L factor; the upper triangular factor U is stored on and above the diagonal); the order
+   of the square matrix; on exit, the pivot indices (the size of the array must be the order).
+   This is equivalent to the GETRF LAPACK routine.
+
+ Computes the inverse of a matrix using LU factorization.
+   Parameters: the N by N matrix to invert, which contains the inverse on exit; the order of
+   the square matrix. This is equivalent to the GETRF and GETRI LAPACK routines.
+
+ Computes the inverse of a previously factored matrix.
+   Parameters: the LU factored N by N matrix, which contains the inverse on exit; the order of
+   the square matrix; the pivot indices. This is equivalent to the GETRI LAPACK routine.
+
+ Solves A*X=B for X using LU factorization.
+   Parameters: the number of columns of B; the square matrix A; the order of the square matrix;
+   on entry the B matrix, on exit the X matrix. This is equivalent to the GETRF and GETRS
+   LAPACK routines.
+
+ Solves A*X=B for X using a previously factored A matrix.
+   Parameters: the number of columns of B; the factored A matrix; the order of the square
+   matrix; the pivot indices; on entry the B matrix, on exit the X matrix. This is equivalent
+   to the GETRS LAPACK routine.
+
+ Computes the Cholesky factorization of A.
+   Parameters: on entry, a square, positive definite matrix, overwritten with the Cholesky
+   factorization on exit; the number of rows or columns in the matrix. This is equivalent to
+   the POTRF LAPACK routine.
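As a concrete reading of the GEMM-style update c = alpha*op(a)*op(b) + beta*c documented a few entries above, here is a plain C# reference loop for row-major arrays, not an optimized implementation; setting alpha = 1, beta = 0 and no transposition gives the simpler "multiply two matrices" routine.

```csharp
using System;

static class GemmDemo
{
    // c = alpha * op(a) * op(b) + beta * c, where op() is an optional transpose.
    // Plain triple loop over row-major 2D arrays, for illustration only.
    static void Gemm(bool transA, bool transB, double alpha,
                     double[,] a, double[,] b, double beta, double[,] c)
    {
        int m = c.GetLength(0), n = c.GetLength(1);
        int k = transA ? a.GetLength(0) : a.GetLength(1);   // inner dimension
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
            {
                double sum = 0;
                for (int p = 0; p < k; p++)
                {
                    double aip = transA ? a[p, i] : a[i, p];
                    double bpj = transB ? b[j, p] : b[p, j];
                    sum += aip * bpj;
                }
                c[i, j] = alpha * sum + beta * c[i, j];
            }
    }

    static void Main()
    {
        var a = new double[,] { { 1, 2 }, { 3, 4 } };
        var b = new double[,] { { 5, 6 }, { 7, 8 } };
        var c = new double[,] { { 1, 0 }, { 0, 1 } };

        Gemm(false, false, 1.0, a, b, 2.0, c);   // c = a*b + 2*c
        Console.WriteLine($"{c[0, 0]} {c[0, 1]}");   // 21 22
        Console.WriteLine($"{c[1, 0]} {c[1, 1]}");   // 43 52
    }
}
```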
+
+ Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary
+   similarity transformations.
+   Parameters: the source matrix to reduce; output arrays for internal storage of the real and
+   imaginary parts of the eigenvalues; an output array that contains further information about
+   the transformations; the order of the initial matrix. Derived from the Algol procedure
+   HTRIDI (Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson; Handbook for
+   Auto. Comp., Vol. II - Linear Algebra) and the corresponding EISPACK Fortran subroutine.
+
+ Symmetric Householder reduction to tridiagonal form.
+   Parameters: the data array of matrix V (eigenvectors); arrays for internal storage of the
+   real and imaginary parts of the eigenvalues; the order of the initial matrix. Derived from
+   the Algol procedure tred2 (Bowdler, Martin, Reinsch, and Wilkinson; Handbook for Auto.
+   Comp., Vol. II - Linear Algebra) and the corresponding EISPACK Fortran subroutine.
+
+ Symmetric tridiagonal QL algorithm.
+   Parameters: the data array of matrix V (eigenvectors); arrays for internal storage of the
+   real and imaginary parts of the eigenvalues; the order of the initial matrix. Derived from
+   the Algol procedure tql2 (Bowdler, Martin, Reinsch, and Wilkinson; Handbook for Auto.
+   Comp., Vol. II - Linear Algebra) and the corresponding EISPACK Fortran subroutine.
+
+ Determines eigenvectors by undoing the symmetric tridiagonalization transformation.
+   Parameters: the data array of matrix V (eigenvectors); the matrix previously tridiagonalized
+   by SymmetricTridiagonalize; further information about the transformations; the order of the
+   input matrix. Derived from the Algol procedure HTRIBK (Smith, Boyle, Dongarra, Garbow,
+   Ikebe, Klema, Moler, and Wilkinson; Handbook for Auto. Comp., Vol. II - Linear Algebra) and
+   the corresponding EISPACK Fortran subroutine.
+
+ Nonsymmetric reduction to Hessenberg form.
+   Parameters: the data array of matrix V (eigenvectors); an array for internal storage of the
+   nonsymmetric Hessenberg form; the order of the initial matrix. Derived from the Algol
+   procedures orthes and ortran (Martin and Wilkinson; Handbook for Auto. Comp., Vol. II -
+   Linear Algebra) and the corresponding EISPACK Fortran subroutines.
+
+ Nonsymmetric reduction from Hessenberg to real Schur form.
+   Parameters: the data array of matrix V (eigenvectors); an array for internal storage of the
+   nonsymmetric Hessenberg form; arrays for internal storage of the real and imaginary parts of
+   the eigenvalues; the order of the initial matrix. Derived from the Algol procedure hqr2
+   (Martin and Wilkinson; Handbook for Auto. Comp., Vol. II - Linear Algebra) and the
+   corresponding EISPACK Fortran subroutine.
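For intuition about the eigenvalue routines above (eigenvalues returned in ascending order, V holding the eigenvectors), the 2-by-2 symmetric case can be done in closed form. The sketch below uses that shortcut rather than the tridiagonalize-plus-QL path the documentation describes, and the matrix is invented for illustration.

```csharp
using System;

static class SymmetricEigen2x2Demo
{
    static void Main()
    {
        // Symmetric 2x2 matrix [[a, b], [b, d]].
        double a = 2, b = 1, d = 2;

        // Closed-form eigenvalues, reported in ascending order as in the docs above.
        double mean = (a + d) / 2;
        double r = Math.Sqrt((a - d) * (a - d) / 4 + b * b);
        double lambda1 = mean - r;   // smaller eigenvalue
        double lambda2 = mean + r;   // larger eigenvalue

        // Eigenvector of [[a, b], [b, d]] for eigenvalue λ (with b != 0): (b, λ - a), normalized.
        (double, double) EigenVector(double lambda)
        {
            double vx = b, vy = lambda - a;
            double norm = Math.Sqrt(vx * vx + vy * vy);
            return (vx / norm, vy / norm);
        }

        Console.WriteLine($"eigenvalues: {lambda1}, {lambda2}");   // 1, 3
        Console.WriteLine($"v1: {EigenVector(lambda1)}");          // ≈ (0.707, -0.707)
        Console.WriteLine($"v2: {EigenVector(lambda2)}");          // ≈ (0.707,  0.707)
    }
}
```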
+
+ Assumes that the two input matrices have already been transposed.
+
+ Try to find out whether the provider is available, at least in principle. Verification may
+   still fail if available, but it will certainly fail if unavailable.
+
+ Initialize and verify that the provider is indeed available. If not, fall back to alternatives
+   like the managed provider.
+
+ Frees memory buffers, caches and handles allocated in or to the provider. Does not unload the
+   provider itself; it is still usable afterwards.
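The availability/initialize/free hooks described above suggest a simple selection pattern: probe the native provider, verify it, and fall back to the managed one if anything fails. The interface and class names below are hypothetical stand-ins, not the library's real types; only the selection logic is the point.

```csharp
using System;

// Hypothetical shape of a provider with the lifecycle hooks documented above.
interface IProviderLifecycle
{
    bool IsAvailable();        // cheap check: "available at least in principle"
    void InitializeVerify();   // throws if the provider cannot actually be used
    void FreeResources();      // releases buffers/handles; provider stays usable
}

static class ProviderFallbackDemo
{
    class ManagedStub : IProviderLifecycle
    {
        public bool IsAvailable() => true;   // managed code is always available
        public void InitializeVerify() { }
        public void FreeResources() { }
        public override string ToString() => "managed (reference) provider";
    }

    class NativeStub : IProviderLifecycle
    {
        public bool IsAvailable() => false;  // pretend the native binaries are missing
        public void InitializeVerify() => throw new InvalidOperationException();
        public void FreeResources() { }
    }

    // Pick the first provider that both claims availability and verifies.
    static IProviderLifecycle Select(IProviderLifecycle native, IProviderLifecycle managed)
    {
        if (native.IsAvailable())
        {
            try { native.InitializeVerify(); return native; }
            catch (Exception) { native.FreeResources(); }
        }
        managed.InitializeVerify();
        return managed;
    }

    static void Main()
    {
        var chosen = Select(new NativeStub(), new ManagedStub());
        Console.WriteLine(chosen);   // falls back to the managed provider
    }
}
```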
+
+ Complex scalar division X/Y.
+   Parameters: the real part of X; the imaginary part of X; the real part of Y; the imaginary
+   part of Y. Returns the division result as a complex number.
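One common way to implement the complex scalar division helper described above is Smith's formula, which scales by the larger of |Re(Y)| and |Im(Y)| to avoid overflow; the library's actual helper may differ, so treat this C# version as an illustrative sketch.

```csharp
using System;

static class ComplexDivisionDemo
{
    // Divide x = xr + i*xi by y = yr + i*yi using Smith's formula.
    static (double re, double im) Divide(double xr, double xi, double yr, double yi)
    {
        if (Math.Abs(yr) >= Math.Abs(yi))
        {
            double t = yi / yr;
            double d = yr + yi * t;
            return ((xr + xi * t) / d, (xi - xr * t) / d);
        }
        else
        {
            double t = yr / yi;
            double d = yr * t + yi;
            return ((xr * t + xi) / d, (xi * t - xr) / d);
        }
    }

    static void Main()
    {
        // (3 + 2i) / (1 - i) = (1 + 5i) / 2
        var (re, im) = Divide(3, 2, 1, -1);
        Console.WriteLine($"{re} + {im}i");   // 0.5 + 2.5i
    }
}
```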
+
+ Intel's Math Kernel Library (MKL) linear algebra provider.
+
+ Computes the thin QR factorization of A, where M > N.
+   Parameters: on entry, the M by N A matrix to factor, overwritten on exit with the Q matrix of
+   the QR factorization; the number of rows in the A matrix; the number of columns in the A
+   matrix; on exit, an N by N matrix that holds the R matrix of the QR factorization; a min(m,n)
+   vector that on exit contains additional information to be used by the QR solve routine.
+   This is similar to the GEQRF and ORGQR LAPACK routines.
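The thin QR factorization above (M > N, Q with orthonormal columns, R square upper triangular) is what makes the least-squares solve of A*X=B with rows >= columns work: solve R*x = Q^T*b by back substitution. The sketch below uses classical Gram-Schmidt for clarity rather than the Householder-based GEQRF/ORGQR route the documentation refers to, and the 3-by-2 data is invented.

```csharp
using System;

static class QrLeastSquaresDemo
{
    static void Main()
    {
        // Overdetermined system (3 rows, 2 columns): fit y ≈ c0 + c1*t
        // at t = 0, 1, 2 with observations y = 1, 2, 2.
        double[,] a = { { 1, 0 }, { 1, 1 }, { 1, 2 } };
        double[] b = { 1, 2, 2 };

        int m = 3, n = 2;
        var q = new double[m, n];   // thin Q (m by n, orthonormal columns)
        var r = new double[n, n];   // R (n by n, upper triangular)

        // Classical Gram-Schmidt: A = Q*R.
        for (int j = 0; j < n; j++)
        {
            var v = new double[m];
            for (int i = 0; i < m; i++) v[i] = a[i, j];
            for (int k = 0; k < j; k++)
            {
                double dot = 0;
                for (int i = 0; i < m; i++) dot += q[i, k] * a[i, j];
                r[k, j] = dot;
                for (int i = 0; i < m; i++) v[i] -= dot * q[i, k];
            }
            double norm = 0;
            for (int i = 0; i < m; i++) norm += v[i] * v[i];
            r[j, j] = Math.Sqrt(norm);
            for (int i = 0; i < m; i++) q[i, j] = v[i] / r[j, j];
        }

        // Least-squares solution: R*x = Q^T * b, solved by back substitution.
        var x = new double[n];
        for (int j = 0; j < n; j++)
            for (int i = 0; i < m; i++) x[j] += q[i, j] * b[i];
        for (int j = n - 1; j >= 0; j--)
        {
            for (int k = j + 1; k < n; k++) x[j] -= r[j, k] * x[k];
            x[j] /= r[j, j];
        }

        Console.WriteLine($"c0 = {x[0]:F4}, c1 = {x[1]:F4}");   // ≈ 1.1667 and 0.5000
    }
}
```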
+
+ Hint path where to look for the native binaries.
+
+ Sets the desired bit consistency on repeated identical computations on varying CPU
+   architectures, as a trade-off with performance.
+   Parameters: the VML optimal precision and rounding; the VML accuracy mode.
+
+ Try to find out whether the provider is available, at least in principle. Verification may
+   still fail if available, but it will certainly fail if unavailable.
+
+ Initialize and verify that the provider is indeed available. If calling this method fails,
+   consider falling back to alternatives like the managed provider.
+
+ Frees memory buffers, caches and handles allocated in or to the provider.
+ Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. 
+ The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. 
+ There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. 
The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. 
+ + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Error codes return from the MKL provider. + + + + + Unable to allocate memory. + + + + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. 
+ + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. 
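+ As a usage illustration of the POTRF/POTRS-style routines documented above, the following hedged sketch factors a small symmetric positive definite matrix once and reuses the factorization to solve A*X=B, again through the public Math.NET Numerics API rather than the provider interface (the exact provider entry points are assumptions here):
+
+ ```csharp
+ // Hedged sketch: Cholesky factor-then-solve with MathNet.Numerics.LinearAlgebra.
+ using System;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class CholeskyExample
+ {
+     static void Main()
+     {
+         // A symmetric, positive definite matrix, as the Cholesky routine requires.
+         var a = Matrix<double>.Build.DenseOfArray(new double[,]
+         {
+             { 4, 1 },
+             { 1, 3 }
+         });
+         var b = Vector<double>.Build.Dense(new double[] { 1, 2 });
+
+         var chol = a.Cholesky(); // factor once (POTRF-style step)
+         var x = chol.Solve(b);   // reuse the factorization to solve A*x = b (POTRS-style step)
+         Console.WriteLine(x);
+     }
+ }
+ ```
+
+ Factoring once and calling Solve repeatedly is the point of the separate "previously factored" overloads documented here: the expensive decomposition is done a single time.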
+ + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . 
+ On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. 
+ + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. 
The length of the array must be order * order. + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . 
+ The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. 
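+ For the GESVD-style decomposition just described, a short hedged sketch using the public API follows; whether the U and VT factors are computed is controlled by the boolean flag, mirroring the "compute the singular U and VT vectors or not" parameter above (the high-level `Svd` call and property names are my assumption about the usual entry point):
+
+ ```csharp
+ // Hedged sketch: singular value decomposition and an SVD-based least-squares solve.
+ using System;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class SvdExample
+ {
+     static void Main()
+     {
+         var a = Matrix<double>.Build.DenseOfArray(new double[,]
+         {
+             { 2, 0 },
+             { 0, 1 },
+             { 0, 0 }
+         });
+
+         var svd = a.Svd(true);      // true: also compute the U and VT factors
+         Console.WriteLine(svd.S);   // singular values of A
+         Console.WriteLine(svd.U);   // left singular vectors
+         Console.WriteLine(svd.VT);  // transposed right singular vectors
+
+         // Least-squares solve of A*x = b using the decomposition.
+         var b = Vector<double>.Build.Dense(new double[] { 4, 3, 0 });
+         Console.WriteLine(svd.Solve(b));
+     }
+ }
+ ```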
+ + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. 
+ This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. 
+ + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Error codes return from the native OpenBLAS provider. + + + + + Unable to allocate memory. + + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. + + Uses and uses the value of + to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + Uses the value of to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + Uses + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + if set to true , the class is thread safe. + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Multiplicative congruential generator using a modulus of 2^31-1 and a multiplier of 1132489760. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Multiplicative congruential generator using a modulus of 2^59 and a multiplier of 13^13. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. 
+ + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Random number generator using Mersenne Twister 19937 algorithm. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + Uses the value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + A 32-bit combined multiple recursive generator with 2 components of order 3. + + Based off of P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816--822. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. 
+ + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Represents a Parallel Additive Lagged Fibonacci pseudo-random number generator. + + + The type bases upon the implementation in the + Boost Random Number Library. + It uses the modulus 232 and by default the "lags" 418 and 1279. Some popular pairs are presented on + Wikipedia - Lagged Fibonacci generator. + + + + + Default value for the ShortLag + + + + + Default value for the LongLag + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The ShortLag value + TheLongLag value + + + + Gets the short lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Gets the long lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Stores an array of random numbers + + + + + Stores an index for the random number array element that will be accessed next. + + + + + Fills the array with new unsigned random numbers. + + + Generated random numbers are 32-bit unsigned integers greater than or equal to 0 + and less than or equal to . + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + This class implements extension methods for the System.Random class. 
The extension methods generate + pseudo-random distributed numbers for types other than double and int32. + + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random bytes. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers greater than or equal to zero and less than . + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers within the specified range. + + The random number generator. + The array to fill with random values. + Lower bound, inclusive. + Upper bound, exclusive. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative random number less than . + + The random number generator. + + A 64-bit signed integer greater than or equal to 0, and less than ; that is, + the range of return values includes 0 but not . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int32 range. + + The random number generator. + + A 32-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int64 range. + + The random number generator. + + A 64-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative decimal floating point random number less than 1.0. + + The random number generator. 
+ + A decimal floating point number greater than or equal to 0.0, and less than 1.0; that is, + the range of return values includes 0.0 but not 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random boolean. + + The random number generator. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Provides a time-dependent seed value, matching the default behavior of System.Random. + WARNING: There is no randomness in this seed and quick repeated calls can cause + the same seed value. Do not use for cryptography! + + + + + Provides a seed based on time and unique GUIDs. + WARNING: There is only low randomness in this seed, but at least quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Provides a seed based on an internal random number generator (crypto if available), time and unique GUIDs. + WARNING: There is only medium randomness in this seed, but quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Base class for random number generators. This class introduces a layer between + and the Math.Net Numerics random number generators to provide thread safety. + When used directly it use the System.Random as random number source. + + + + + Initializes a new instance of the class using + the value of to set whether + the instance is thread safe or not. + + + + + Initializes a new instance of the class. + + if set to true , the class is thread safe. + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The array to fill with random values. + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The size of the array to fill. + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than . + + + + + Returns a random number less then a specified maximum. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + A 32-bit signed integer less than . + is zero or negative. + + + + Returns a random number within a specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + A 32-bit signed integer greater than or equal to and less than ; that is, the range of return values includes but not . If equals , is returned. + + is greater than . + + + + Fills an array with random 32-bit signed integers greater than or equal to zero and less than . + + The array to fill with random values. + + + + Returns an array with random 32-bit signed integers greater than or equal to zero and less than . + + The size of the array to fill. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ 1. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . + + + + + Returns an infinite sequence of random numbers within a specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Fills the elements of a specified array of bytes with random numbers. + + An array of bytes to contain random numbers. + is null. + + + + Returns a random number between 0.0 and 1.0. + + A double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. + + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 1982 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: + An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 2006 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". + Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. 
+ + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Implements a multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George. (2003). Xorshift RNGs. + Xn = a * Xn−3 + c mod 2^32 + http://www.jstatsoft.org/v08/i14/paper + + + + + The default value for X1. + + + + + The default value for X2. + + + + + The default value for the multiplier. + + + + + The default value for the carry over. + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Seed or last but three unsigned random number. + + + + + Last but two unsigned random number. + + + + + Last but one unsigned random number. + + + + + The value of the carry over. + + + + + The multiplier. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Note: must be less than . + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. 
+ + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Xoshiro256** pseudo random number generator. + A random number generator based on the class in the .NET library. + + + This is xoshiro256** 1.0, our all-purpose, rock-solid generator. It has + excellent(sub-ns) speed, a state space(256 bits) that is large enough + for any parallel application, and it passes all tests we are aware of. + + For generating just floating-point numbers, xoshiro256+ is even faster. + + The state must be seeded so that it is not everywhere zero.If you have + a 64-bit seed, we suggest to seed a splitmix64 generator and use its + output to fill s. + + For further details see: + David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators". + https://arxiv.org/abs/1805.01407 + + + + + Construct a new random number generator with a random seed. + + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. 
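The multiply-with-carry recurrence quoted above, Xn = a * Xn−3 + c mod 2^32, together with the listed defaults (a = 916905990, c = 13579, X1 = 77465321, X2 = 362436069), can be sketched as follows. The carry handling (new carry = high 32 bits of the product) and the mapping of the seed and X1/X2 onto the three lags are assumptions based on the standard multiply-with-carry construction, not a transcription of the library code.

```python
# Multiply-with-carry sketch of the recurrence documented above:
#   X_n = a * X_{n-3} + c (mod 2^32),  new carry = high 32 bits.
MASK32 = 0xFFFFFFFF

def xorshift_mwc(seed, a=916905990, c=13579, x1=77465321, x2=362436069):
    # lag-3 state; how seed/x1/x2 map onto the lags is an assumption here
    lags = [seed & MASK32, x2, x1]           # [x_{n-3}, x_{n-2}, x_{n-1}]
    while True:
        t = a * lags[0] + c
        c = t >> 32                          # carry over for the next step
        xn = t & MASK32                      # X_n = a*X_{n-3} + c (mod 2^32)
        lags = [lags[1], lags[2], xn]
        yield xn / 2**32                     # double in [0, 1)

g = xorshift_mwc(1)
print([next(g) for _ in range(3)])
```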
+ + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Splitmix64 RNG. + + RNG state. This can take any value, including zero. + A new random UInt64. + + Splitmix64 produces equidistributed outputs, thus if a zero is generated then the + next zero will be after a further 2^64 outputs. + + + + + Bisection root-finding algorithm. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy for both the root and the function value at the root. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Algorithm by Brent, Van Wijngaarden, Dekker et al. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. 
+ The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Helper method useful for preventing rounding errors. + a*sign(b) + + + + Algorithm by Broyden. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Relative step size for calculating the Jacobian matrix at first step. Default 1.0e-4 + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + Relative step size for calculating the Jacobian matrix at first step. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Helper method to calculate an approximation of the Jacobian. + + The function. + The argument (initial guess). + The result (of initial guess). + Relative step size for calculating the Jacobian. + + + + Finds roots to the cubic equation x^3 + a2*x^2 + a1*x + a0 = 0 + Implements the cubic formula in http://mathworld.wolfram.com/CubicFormula.html + + + + + Q and R are transformed variables. + + + + + n^(1/3) - work around a negative double raised to (1/3) + + + + + Find all real-valued roots of the cubic equation a0 + a1*x + a2*x^2 + x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Pure Newton-Raphson root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. 
Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Robust Newton-Raphson root-finding algorithm that falls back to bisection when overshooting or converging too slow, or to subdivision on lacking bracketing. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Default 20. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Example: 20. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Pure Secant root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. 
+ Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false + + + Detect a range containing at least one root. + The function to detect roots from. + Lower value of the range. + Upper value of the range + The growing factor of research. Usually 1.6. + Maximum number of iterations. Usually 50. + True if the bracketing operation succeeded, false otherwise. + This iterative methods stops when two values with opposite signs are found. + + + + Sorting algorithms for single, tuple and triple lists. + + + + + Sort a list of keys, in place using the quick sort algorithm using the quick sort algorithm. + + The type of elements in the key list. + List to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a range of a list of keys, in place using the quick sort algorithm. + + The type of element in the list. + List to sort. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. 
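The Sort overloads documented in this section all follow one pattern: sort a key list and permute one or two companion item lists in exactly the same way. The library does this with an in-place quick sort; the Python sketch below shows the same key/item permutation idea using a simple index sort and a comparer, purely as an illustration of the contract rather than the quick sort itself.

```python
# Sort `keys` with `compare` and reorder `items` identically,
# mirroring the Sort(keys, items, comparer) helpers described above.
from functools import cmp_to_key

def sort_with_items(keys, items, compare):
    order = sorted(range(len(keys)),
                   key=cmp_to_key(lambda i, j: compare(keys[i], keys[j])))
    keys[:]  = [keys[i]  for i in order]
    items[:] = [items[i] for i in order]

k = [3, 1, 2]
v = ["c", "a", "b"]
sort_with_items(k, v, lambda a, b: a - b)   # ascending comparison
print(k, v)                                  # [1, 2, 3] ['a', 'b', 'c']
```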
+ + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the primary list. + The type of elements in the secondary list. + List to sort. + List to sort on duplicate primary items, and permute the same way as the key list. + Comparison, defining the primary sort order. + Comparison, defining the secondary sort order. + + + + Recursive implementation for an in place quick sort on a list. + + The type of the list on which the quick sort is performed. + The list which is sorted using quick sort. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on a list while reordering one other list accordingly. + + The type of the list on which the quick sort is performed. + The type of the list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on one list while reordering two other lists accordingly. + + The type of the list on which the quick sort is performed. + The type of the first list which is automatically reordered accordingly. + The type of the second list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The first list which is automatically reordered accordingly. + The second list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on the primary and then by the secondary list while reordering one secondary list accordingly. + + The type of the primary list. + The type of the secondary list. + The list which is sorted using quick sort. + The list which is sorted secondarily (on primary duplicates) and automatically reordered accordingly. + The method with which to compare two elements of the primary list. + The method with which to compare two elements of the secondary list. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Performs an in place swap of two elements in a list. + + The type of elements stored in the list. + The list in which the elements are stored. + The index of the first element of the swap. + The index of the second element of the swap. + + + + This partial implementation of the SpecialFunctions class contains all methods related to the Airy functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Bessel functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the error function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Hankel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the harmonic function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the logistic function. 
+ + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the spherical Bessel functions. + + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of Airy function Ai + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of the Airy function Ai. + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Ai. + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. + ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi(z). + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. + ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. 
+ The exponentially scaled derivative of the Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Bi. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. + + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * Y(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. + + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselY(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. 
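The scaled variants documented above all follow one convention: multiply the unscaled function by an exponential factor that cancels its growth, Exp(-|Im z|) for BesselJ/BesselY, Exp(-|Re z|) for BesselI, and Exp(z) for BesselK. SciPy exposes the same convention, which the hedged sketch below uses to illustrate the relationship; this is not the Math.NET API, and availability of scipy/numpy is assumed.

```python
# Scaling convention check with SciPy's Bessel routines:
#   jve(n, z) == jv(n, z) * exp(-|Im z|)   (ScaledBesselJ)
#   ive(n, z) == iv(n, z) * exp(-|Re z|)   (ScaledBesselI)
#   kve(n, x) == kv(n, x) * exp(x)         (ScaledBesselK, real x here)
import numpy as np
from scipy import special

n, z = 1, 2.5 + 1.0j
print(np.isclose(special.jve(n, z), special.jv(n, z) * np.exp(-abs(z.imag))))
print(np.isclose(special.ive(n, z), special.iv(n, z) * np.exp(-abs(z.real))))

x = 3.0
print(np.isclose(special.kve(n, x), special.kv(n, x) * np.exp(x)))
```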
+ + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Computes the logarithm of the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The logarithm of the Euler Beta function evaluated at z,w. + If or are not positive. + + + + Computes the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The Euler Beta function evaluated at z,w. + If or are not positive. + + + + Returns the lower incomplete (unregularized) beta function + B(a,b,x) = int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. + The lower incomplete (unregularized) beta function. + + + + Returns the regularized lower incomplete beta function + I_x(a,b) = 1/Beta(a,b) * int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. + The regularized lower incomplete beta function. + + + + ************************************** + COEFFICIENTS FOR METHOD ErfImp * + ************************************** + + Polynomial coefficients for a numerator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a denominator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. 
+ + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + + ************************************** + COEFFICIENTS FOR METHOD ErfInvImp * + ************************************** + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. 
+ + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Calculates the error function. + The value to evaluate. + the error function evaluated at given value. + + + returns 1 if x == double.PositiveInfinity. + returns -1 if x == double.NegativeInfinity. + + + + + Calculates the complementary error function. + The value to evaluate. + the complementary error function evaluated at given value. + + + returns 0 if x == double.PositiveInfinity. + returns 2 if x == double.NegativeInfinity. + + + + + Calculates the inverse error function evaluated at z. + The inverse error function evaluated at given value. + + + returns double.PositiveInfinity if z >= 1.0. + returns double.NegativeInfinity if z <= -1.0. + + + Calculates the inverse error function evaluated at z. + value to evaluate. + the inverse error function evaluated at Z. + + + + Implementation of the error function. + + Where to evaluate the error function. + Whether to compute 1 - the error function. + the error function. + + + Calculates the complementary inverse error function evaluated at z. + The complementary inverse error function evaluated at given value. + We have tested this implementation against the arbitrary precision mpmath library + and found cases where we can only guarantee 9 significant figures correct. + + returns double.PositiveInfinity if z <= 0.0. + returns double.NegativeInfinity if z >= 2.0. + + + calculates the complementary inverse error function evaluated at z. + value to evaluate. + the complementary inverse error function evaluated at Z. + + + + The implementation of the inverse error function. + + First intermediate parameter. + Second intermediate parameter. + Third intermediate parameter. + the inverse error function. + + + + Computes the generalized Exponential Integral function (En). + + The argument of the Exponential Integral function. + Integer power of the denominator term. Generalization index. + The value of the Exponential Integral function. + + This implementation of the computation of the Exponential Integral function follows the derivation in + "Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55", Abramowitz, M., and Stegun, I.A. 1964, reprinted 1968 by + Dover Publications, New York), Chapters 6, 7, and 26. 
+ AND + "Advanced mathematical methods for scientists and engineers", Bender, Carl M.; Steven A. Orszag (1978). page 253 + + + for x > 1 uses continued fraction approach that is often used to compute incomplete gamma. + for 0 < x <= 1 uses Taylor series expansion + + Our unit tests suggest that the accuracy of the Exponential Integral function is correct up to 13 floating point digits. + + + + + Computes the factorial function x -> x! of an integer number > 0. The function can represent all number up + to 22! exactly, all numbers up to 170! using a double representation. All larger values will overflow. + + A value value! for value > 0 + + If you need to multiply or divide various such factorials, consider using the logarithmic version + instead so you can add instead of multiply and subtract instead of divide, and + then exponentiate the result using . This will also circumvent the problem that + factorials become very large even for small parameters. + + + + + + Computes the factorial of an integer. + + + + + Computes the logarithmic factorial function x -> ln(x!) of an integer number > 0. + + A value value! for value > 0 + + + + Computes the binomial coefficient: n choose k. + + A nonnegative value n. + A nonnegative value h. + The binomial coefficient: n choose k. + + + + Computes the natural logarithm of the binomial coefficient: ln(n choose k). + + A nonnegative value n. + A nonnegative value h. + The logarithmic binomial coefficient: ln(n choose k). + + + + Computes the multinomial coefficient: n choose n1, n2, n3, ... + + A nonnegative value n. + An array of nonnegative values that sum to . + The multinomial coefficient. + if is . + If or any of the are negative. + If the sum of all is not equal to . + + + + The order of the approximation. + + + + + Auxiliary variable when evaluating the function. + + + + + Polynomial coefficients for the approximation. + + + + + Computes the logarithm of the Gamma function. + + The argument of the gamma function. + The logarithm of the gamma function. + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which achieves an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + Our unit tests suggest that the accuracy of the Gamma function is correct up to 14 floating point digits. + + + + + Computes the Gamma function. + + The argument of the gamma function. + The logarithm of the gamma function. + + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which should achieve an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + + Our unit tests suggest that the accuracy of the Gamma function is correct up to 13 floating point digits. + + + + + Returns the upper incomplete regularized gamma function + Q(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete regularized gamma function. 
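+ 
+ A small illustration of the "work in log space" advice in the factorial documentation above: the binomial
+ coefficient is formed from logarithmic factorials and exponentiated once at the end, so intermediate values
+ never overflow even though n! itself would for n > 170. LogFactorial below is a naive stand-in (a plain sum of
+ logarithms) for the logarithmic factorial / log-gamma functions documented above; it is not the library's code.
+ 
+     using System;
+ 
+     static class LogFactorialSketch
+     {
+         // Naive stand-in: ln(n!) = sum of ln(i) for i = 2..n.
+         static double LogFactorial(int n)
+         {
+             double sum = 0.0;
+             for (int i = 2; i <= n; i++)
+             {
+                 sum += Math.Log(i);
+             }
+             return sum;
+         }
+ 
+         static void Main()
+         {
+             // 1000! overflows a double, but ln(1000 choose 500) is modest, so the
+             // binomial coefficient can be assembled in log space and exponentiated once.
+             int n = 1000, k = 500;
+             double lnBinomial = LogFactorial(n) - LogFactorial(k) - LogFactorial(n - k);
+             Console.WriteLine(lnBinomial);            // about 689.47
+             Console.WriteLine(Math.Exp(lnBinomial));  // about 2.7e299, still a finite double
+         }
+     }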
+ + + + Returns the upper incomplete gamma function + Gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete gamma function. + + + + Returns the lower incomplete gamma function + gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the lower incomplete regularized gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the inverse P^(-1) of the regularized lower incomplete gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0, + such that P^(-1)(a,P(a,x)) == x. + + + + + Computes the Digamma function which is mathematically defined as the derivative of the logarithm of the gamma function. + This implementation is based on + Jose Bernardo + Algorithm AS 103: + Psi ( Digamma ) Function, + Applied Statistics, + Volume 25, Number 3, 1976, pages 315-317. + Using the modifications as in Tom Minka's lightspeed toolbox. + + The argument of the digamma function. + The value of the DiGamma function at . + + + + Computes the inverse Digamma function: this is the inverse of the logarithm of the gamma function. This function will + only return solutions that are positive. + This implementation is based on the bisection method. + + The argument of the inverse digamma function. + The positive solution to the inverse DiGamma function at . + + + + Computes the Rising Factorial (Pochhammer function) x -> (x)n, n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Rising Factorial for x and n + + + + Computes the Falling Factorial (Pochhammer function) x -> x(n), n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Falling Factorial for x and n + + + + A generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. + This is the most common pFq(a1, ..., ap; b1,...,bq; z) representation + see: https://en.wikipedia.org/wiki/Generalized_hypergeometric_function + + The list of coefficients in the numerator + The list of coefficients in the denominator + The variable in the power series + The value of the Generalized HyperGeometric Function. + + + + Returns the Hankel function of the first kind. + HankelH1(n, z) is defined as BesselJ(n, z) + j * BesselY(n, z). + + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the first kind. + + + + Returns the exponentially scaled Hankel function of the first kind. + ScaledHankelH1(n, z) is given by Exp(-z * j) * HankelH1(n, z) where j = Sqrt(-1). + + The order of the Hankel function. + The value to compute the Hankel function of. + The exponentially scaled Hankel function of the first kind. + + + + Returns the Hankel function of the second kind. + HankelH2(n, z) is defined as BesselJ(n, z) - j * BesselY(n, z). + + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the second kind. + + + + Returns the exponentially scaled Hankel function of the second kind. + ScaledHankelH2(n, z) is given by Exp(z * j) * HankelH2(n, z) where j = Sqrt(-1). + + The order of the Hankel function. 
+ The value to compute the Hankel function of. + The exponentially scaled Hankel function of the second kind. + + + + Computes the 'th Harmonic number. + + The Harmonic number which needs to be computed. + The t'th Harmonic number. + + + + Compute the generalized harmonic number of order n of m. (1 + 1/2^m + 1/3^m + ... + 1/n^m) + + The order parameter. + The power parameter. + General Harmonic number. + + + + Returns the Kelvin function of the first kind. + KelvinBe(nu, x) is given by BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(nu, x) and KelvinBei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function of the first kind. + + + + Returns the Kelvin function ber. + KelvinBer(nu, x) is given by the real part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function ber. + KelvinBer(x) is given by the real part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(x) is equivalent to KelvinBer(0, x). + + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function bei. + KelvinBei(nu, x) is given by the imaginary part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the Kelvin function bei. + KelvinBei(x) is given by the imaginary part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBei(x) is equivalent to KelvinBei(0, x). + + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the derivative of the Kelvin function ber. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function ber + + + + Returns the derivative of the Kelvin function ber. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ber. + + + + Returns the derivative of the Kelvin function bei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function bei. + + + + Returns the derivative of the Kelvin function bei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function bei. + + + + Returns the Kelvin function of the second kind + KelvinKe(nu, x) is given by Exp(-nu * pi * j / 2) * BesselK(nu, x * sqrt(j)) where j = sqrt(-1). + KelvinKer(nu, x) and KelvinKei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + The order of the Kelvin function. + The value to calculate the kelvin function of, + + + + + Returns the Kelvin function ker. + KelvinKer(nu, x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function ker. + KelvinKer(x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKer(x) is equivalent to KelvinKer(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function kei. 
+ KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the Kelvin function kei. + KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKei(x) is equivalent to KelvinKei(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the derivative of the Kelvin function ker. + + The order of the Kelvin function. + The non-negative real value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function ker. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function kei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Returns the derivative of the Kelvin function kei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic + + The parameter for which to compute the logistic function. + The logistic function of . + + + + Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit + + The parameter for which to compute the logit function. This number should be + between 0 and 1. + The logarithm of divided by 1.0 - . + + + + ************************************** + COEFFICIENTS FOR METHODS bessi0 * + ************************************** + + Chebyshev coefficients for exp(-x) I0(x) + in the interval [0, 8]. + + lim(x->0){ exp(-x) I0(x) } = 1. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I0(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessi1 * + ************************************** + + Chebyshev coefficients for exp(-x) I1(x) / x + in the interval [0, 8]. + + lim(x->0){ exp(-x) I1(x) / x } = 1/2. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I1(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessk0, bessk0e * + ************************************** + + Chebyshev coefficients for K0(x) + log(x/2) I0(x) + in the interval [0, 2]. The odd order coefficients are all + zero; only the even order coefficients are listed. + + lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. + + + + Chebyshev coefficients for exp(x) sqrt(x) K0(x) + in the inverted interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessk1, bessk1e * + ************************************** + + Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) + in the interval [0, 2]. + + lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. + + + + Chebyshev coefficients for exp(x) sqrt(x) K1(x) + in the interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). + + + + Returns the modified Bessel function of first kind, order 0 of the argument. +

+ The function is defined as i0(x) = j0( ix ). +

+ The range is partitioned into the two intervals [0, 8] and + (8, infinity). Chebyshev polynomial expansions are employed + in each interval. +

+ The value to compute the Bessel function of. + +
+ + Returns the modified Bessel function of first kind, + order 1 of the argument. +

+ The function is defined as i1(x) = -i j1( ix ). +

+ The range is partitioned into the two intervals [0, 8] and + (8, infinity). Chebyshev polynomial expansions are employed + in each interval. +

+ The value to compute the Bessel function of. + +
+ + Returns the modified Bessel function of the second kind + of order 0 of the argument. +

+ The range is partitioned into the two intervals [0, 2] and
+ (2, infinity). Chebyshev polynomial expansions are employed
+ in each interval.

+ The value to compute the Bessel function of. + +
+ + Returns the exponentially scaled modified Bessel function + of the second kind of order 0 of the argument. + + The value to compute the Bessel function of. + + + + Returns the modified Bessel function of the second kind + of order 1 of the argument. +

+ The range is partitioned into the two intervals [0, 2] and + (2, infinity). Chebyshev polynomial expansions are employed + in each interval. +

+ The value to compute the Bessel function of. + +
+ + Returns the exponentially scaled modified Bessel function + of the second kind of order 1 of the argument. +

+ k1e(x) = exp(x) * k1(x). +

+ The value to compute the Bessel function of. + +
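+ 
+ A short numeric aside on why the exponentially scaled variants above (k0e, and k1e with k1e(x) = exp(x) * k1(x))
+ exist: the unscaled functions decay roughly like exp(-x), which underflows a double long before the scaled
+ value does. The sketch below only demonstrates the underflow; it does not call the Bessel routines themselves.
+ 
+     using System;
+ 
+     class ScalingDemo
+     {
+         static void Main()
+         {
+             // exp(-x) underflows to 0.0 once x exceeds roughly 745, even though the
+             // scaled value exp(x) * k1(x) would still be an ordinary double. Carrying
+             // k1e(x) and keeping the exp(-x) factor symbolic avoids that loss.
+             Console.WriteLine(Math.Exp(-800.0));   // prints 0 (underflow)
+             Console.WriteLine(Math.Exp(-700.0));   // still representable, about 9.9E-305
+         }
+     }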
+ + + Returns the modified Struve function of order 0. + + The value to compute the function of. + + + + Returns the modified Struve function of order 1. + + The value to compute the function of. + + + + Returns the difference between the Bessel I0 and Struve L0 functions. + + The value to compute the function of. + + + + Returns the difference between the Bessel I1 and Struve L1 functions. + + The value to compute the function of. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Numerically stable exponential minus one, i.e. x -> exp(x)-1 + + A number specifying a power. + Returns exp(power)-1. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Evaluation functions, useful for function approximation. + + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. 
+ The coefficients of the polynomial, coefficient for power k at index k. + + + + Numerically stable series summation + + provides the summands sequentially + Sum + + + Evaluates the series of Chebyshev polynomials Ti at argument x/2. + The series is given by +
+            y = sum'( coef[i] * T_i(x/2) ),  i = 0 .. N-1
+ Coefficients are stored in reverse order, i.e. the zero + order term is last in the array. Note N is the number of + coefficients, not the order. +

+ If coefficients are for the interval a to b, x must + have been transformed to x -> 2(2x - b - a)/(b-a) before + entering the routine. This maps x from (a, b) to (-1, 1), + over which the Chebyshev polynomials are defined. +

+ If the coefficients are for the inverted interval, in + which (a, b) is mapped to (1/b, 1/a), the transformation + required is x -> 2(2ab/x - b - a)/(b-a). If b is infinity, + this becomes x -> 4a/x - 1. +

+ SPEED: +

+ Taking advantage of the recurrence properties of the + Chebyshev polynomials, the routine requires one more + addition per loop than evaluating a nested polynomial of + the same degree. +

+ The coefficients of the polynomial. + Argument to the polynomial. + + Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs +

+ Marked as Deprecated in + http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html + + + +

+ 
+ Summation of Chebyshev polynomials, using the Clenshaw method with Reinsch modification.
+ 
+ The number of terms in the sequence.
+ The coefficients of the Chebyshev series, length n+1.
+ The value at which the series is to be evaluated.
+ 
+ ORIGINAL AUTHOR:
+ Dr. Allan J. MacLeod; Dept. of Mathematics and Statistics, University of Paisley; High St., PAISLEY, SCOTLAND
+ REFERENCES:
+ "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series"
+ J. Oliver, J.I.M.A., vol. 20, 1977, pp. 379-391
+ 
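+ 
+ For reference, a minimal evaluator in the style described above: coefficients in reverse order with the
+ zero-order term last, and the argument already transformed to the working interval. It follows the classic
+ Cephes-style recurrence for the primed sum; it is not necessarily identical to this library's internal routine.
+ 
+     using System;
+ 
+     static class ChebyshevSeriesSketch
+     {
+         // Evaluates y = sum'( coef[i] * T_i(x/2) ) with the zero-order coefficient
+         // stored last. x must already be mapped to the working interval, e.g.
+         // x -> 2(2x - b - a)/(b - a) for coefficients on [a, b].
+         public static double Evaluate(double x, double[] coefficients)
+         {
+             double b0 = coefficients[0];
+             double b1 = 0.0;
+             double b2 = 0.0;
+             for (int i = 1; i < coefficients.Length; i++)
+             {
+                 b2 = b1;
+                 b1 = b0;
+                 b0 = x * b1 - b2 + coefficients[i];
+             }
+             // The prime on the sum means the zero-order term carries weight 1/2.
+             return 0.5 * (b0 - b2);
+         }
+ 
+         static void Main()
+         {
+             // Single coefficient c0: the primed sum reduces to 0.5 * c0 * T_0 = 0.5 * c0.
+             Console.WriteLine(Evaluate(0.3, new[] { 2.0 }));   // prints 1
+         }
+     }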
+ + + Valley-shaped Rosenbrock function for 2 dimensions: (x,y) -> (1-x)^2 + 100*(y-x^2)^2. + This function has a global minimum at (1,1) with f(1,1) = 0. + Common range: [-5,10] or [-2.048,2.048]. + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Valley-shaped Rosenbrock function for 2 or more dimensions. + This function have a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1,1,...,1). + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Himmelblau, a multi-modal function: (x,y) -> (x^2+y-11)^2 + (x+y^2-7)^2 + This function has 4 global minima with f(x,y) = 0. + Common range: [-6,6]. + Named after David Mautner Himmelblau + + + https://en.wikipedia.org/wiki/Himmelblau%27s_function + + + + + Rastrigin, a highly multi-modal function with many local minima. + Global minimum of all zeros with f(0) = 0. + Common range: [-5.12,5.12]. + + + https://en.wikipedia.org/wiki/Rastrigin_function + http://www.sfu.ca/~ssurjano/rastr.html + + + + + Drop-Wave, a multi-modal and highly complex function with many local minima. + Global minimum of all zeros with f(0) = -1. + Common range: [-5.12,5.12]. + + + http://www.sfu.ca/~ssurjano/drop.html + + + + + Ackley, a function with many local minima. It is nearly flat in outer regions but has a large hole at the center. + Global minimum of all zeros with f(0) = 0. + Common range: [-32.768, 32.768]. + + + http://www.sfu.ca/~ssurjano/ackley.html + + + + + Bowl-shaped first Bohachevsky function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-100, 100] + + + http://www.sfu.ca/~ssurjano/boha.html + + + + + Plate-shaped Matyas function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-10, 10]. + + + http://www.sfu.ca/~ssurjano/matya.html + + + + + Valley-shaped six-hump camel back function. + Two global minima and four local minima. Global minima with f(x) ) -1.0316 at (0.0898,-0.7126) and (-0.0898,0.7126). + Common range: x in [-3,3], y in [-2,2]. + + + http://www.sfu.ca/~ssurjano/camel6.html + + + + + Statistics operating on arrays assumed to be unsorted. + WARNING: Methods with the Inplace-suffix may modify the data array by reordering its entries. + + + + + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. 
+ Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. 
+ + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. 
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + A class with correlation measures between two datasets. + + + + + Auto-correlation function (ACF) based on FFT for all possible lags k. + + Data array to calculate auto correlation for. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function (ACF) based on FFT for lags between kMin and kMax. + + The data array to calculate auto correlation for. + Max lag to calculate ACF for must be positive and smaller than x.Length. + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function based on FFT for lags k. + + The data array to calculate auto correlation for. + Array with lags to calculate ACF for. + An array with the ACF as a function of the lags k. + + + + The internal method for calculating the auto-correlation. + + The data array to calculate auto-correlation for + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length + Max lag (EXCLUSIVE) to calculate ACF for must be positive and smaller than x.Length + An array with the ACF as a function of the lags k. + + + + Computes the Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + The Pearson product-moment correlation coefficient. + + + + Computes the Weighted Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + Corresponding weights of data. + The Weighted Pearson product-moment correlation coefficient. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Array of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Enumerable of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Spearman Ranked Correlation coefficient. + + Sample data series A. + Sample data series B. + The Spearman ranked correlation coefficient. + + + + Computes the Spearman Ranked Correlation matrix. + + Array of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the Spearman Ranked Correlation matrix. + + Enumerable of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the basic statistics of data set. The class meets the + NIST standard of accuracy for mean, variance, and standard deviation + (the only statistics they provide exact values for) and exceeds them + in increased accuracy mode. + Recommendation: consider to use RunningStatistics instead. 
+ + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Gets the size of the sample. + + The size of the sample. + + + + Gets the sample mean. + + The sample mean. + + + + Gets the unbiased population variance estimator (on a dataset of size N will use an N-1 normalizer). + + The sample variance. + + + + Gets the unbiased population standard deviation (on a dataset of size N will use an N-1 normalizer). + + The sample standard deviation. + + + + Gets the sample skewness. + + The sample skewness. + Returns zero if is less than three. + + + + Gets the sample kurtosis. + + The sample kurtosis. + Returns zero if is less than four. + + + + Gets the maximum sample value. + + The maximum sample value. + + + + Gets the minimum sample value. + + The minimum sample value. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Internal use. Method use for setting the statistics. + + For setting Mean. + For setting Variance. + For setting Skewness. + For setting Kurtosis. + For setting Minimum. + For setting Maximum. + For setting Count. + + + + A consists of a series of s, + each representing a region limited by a lower bound (exclusive) and an upper bound (inclusive). + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + This IComparer performs comparisons between a point and a bucket. + + + + + Compares a point and a bucket. The point will be encapsulated in a bucket with width 0. + + The first bucket to compare. + The second bucket to compare. + -1 when the point is less than this bucket, 0 when it is in this bucket and 1 otherwise. + + + + Lower Bound of the Bucket. + + + + + Upper Bound of the Bucket. + + + + + The number of datapoints in the bucket. + + + Value may be NaN if this was constructed as a argument. + + + + + Initializes a new instance of the Bucket class. + + + + + Constructs a Bucket that can be used as an argument for a + like when performing a Binary search. + + Value to look for + + + + Creates a copy of the Bucket with the lowerbound, upperbound and counts exactly equal. 
+ + A cloned Bucket object. + + + + Width of the Bucket. + + + + + True if this is a single point argument for + when performing a Binary search. + + + + + Default comparer. + + + + + This method check whether a point is contained within this bucket. + + The point to check. + + 0 if the point falls within the bucket boundaries; + -1 if the point is smaller than the bucket, + +1 if the point is larger than the bucket. + + + + Comparison of two disjoint buckets. The buckets cannot be overlapping. + + + 0 if UpperBound and LowerBound are bit-for-bit equal + 1 if This bucket is lower that the compared bucket + -1 otherwise + + + + + Checks whether two Buckets are equal. + + + UpperBound and LowerBound are compared bit-for-bit, but This method tolerates a + difference in Count given by . + + + + + Provides a hash code for this bucket. + + + + + Formats a human-readable string for this bucket. + + + + + A class which computes histograms of data. + + + + + Contains all the Buckets of the Histogram. + + + + + Indicates whether the elements of buckets are currently sorted. + + + + + Initializes a new instance of the Histogram class. + + + + + Constructs a Histogram with a specific number of equally sized buckets. The upper and lower bound of the histogram + will be set to the smallest and largest datapoint. + + The data sequence to build a histogram on. + The number of buckets to use. + + + + Constructs a Histogram with a specific number of equally sized buckets. + + The data sequence to build a histogram on. + The number of buckets to use. + The histogram lower bound. + The histogram upper bound. + + + + Add one data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The datapoint which we want to add. + + + + Add a sequence of data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The sequence of datapoints which we want to add. + + + + Adds a Bucket to the Histogram. + + + + + Sort the buckets if needed. + + + + + Returns the Bucket that contains the value v. + + The point to search the bucket for. + A copy of the bucket containing point . + + + + Returns the index in the Histogram of the Bucket + that contains the value v. + + The point to search the bucket index for. + The index of the bucket containing the point. + + + + Returns the lower bound of the histogram. + + + + + Returns the upper bound of the histogram. + + + + + Gets the n'th bucket. + + The index of the bucket to be returned. + A copy of the n'th bucket. + + + + Gets the number of buckets. + + + + + Gets the total number of datapoints in the histogram. + + + + + Prints the buckets contained in the . + + + + + Kernel density estimation (KDE). + + + + + Estimate the probability density function of a random variable. + + + The routine assumes that the provided kernel is well defined, i.e. a real non-negative function that integrates to 1. + + + + + Estimate the probability density function of a random variable with a Gaussian kernel. + + + + + Estimate the probability density function of a random variable with an Epanechnikov kernel. + The Epanechnikov kernel is optimal in a mean square error sense. + + + + + Estimate the probability density function of a random variable with a uniform kernel. + + + + + Estimate the probability density function of a random variable with a triangular kernel. 
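+ 
+ A minimal sketch of the estimator these methods implement: f_hat(x) = (1 / (n * h)) * sum over the samples of
+ K((x - x_i) / h), for a kernel K that is non-negative and integrates to 1 (the kernels themselves are defined
+ just below). The Gaussian kernel and the bandwidth value used here are illustrative choices; the library's
+ actual API and defaults are not shown in this excerpt.
+ 
+     using System;
+     using System.Linq;
+ 
+     static class KernelDensitySketch
+     {
+         // Standard normal pdf used as the kernel K.
+         static double GaussianKernel(double u)
+         {
+             return Math.Exp(-0.5 * u * u) / Math.Sqrt(2.0 * Math.PI);
+         }
+ 
+         // f_hat(x) = 1/(n*h) * sum over the samples of K((x - x_i)/h).
+         static double Estimate(double x, double bandwidth, double[] samples)
+         {
+             double sum = samples.Sum(xi => GaussianKernel((x - xi) / bandwidth));
+             return sum / (samples.Length * bandwidth);
+         }
+ 
+         static void Main()
+         {
+             double[] data = { -1.1, -0.3, 0.0, 0.4, 1.2 };
+             Console.WriteLine(Estimate(0.0, 0.5, data));   // density estimate at x = 0
+         }
+     }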
+ + + + + A Gaussian kernel (PDF of Normal distribution with mean 0 and variance 1). + This kernel is the default. + + + + + Epanechnikov Kernel: + x => Math.Abs(x) <= 1.0 ? 3.0/4.0(1.0-x^2) : 0.0 + + + + + Uniform Kernel: + x => Math.Abs(x) <= 1.0 ? 1.0/2.0 : 0.0 + + + + + Triangular Kernel: + x => Math.Abs(x) <= 1.0 ? (1.0-Math.Abs(x)) : 0.0 + + + + + A hybrid Monte Carlo sampler for multivariate distributions. + + + + + Number of parameters in the density function. + + + + + Distribution to sample momentum from. + + + + + Standard deviations used in the sampling of different components of the + momentum. + + + + + Gets or sets the standard deviations used in the sampling of different components of the + momentum. + + When the length of pSdv is not the same as Length. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + 1 using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the a random number generator provided by the user. + A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviations + given by pSdv. This constructor will set the burn interval, the method used for + numerical differentiation and the random number generator. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. 
+ The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + The method used for numerical differentiation. + When the number of burnInterval iteration is negative. + When the length of pSdv is not the same as x0. + + + + Initialize parameters. + + The current location of the sampler. + + + + Checking that the location and the momentum are of the same dimension and that each component is positive. + + The standard deviations used for sampling the momentum. + When the length of pSdv is not the same as Length or if any + component is negative. + When pSdv is null. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the gradient. Uses a simple three point estimation. + + Function which the gradient is to be evaluated. + The location where the gradient is to be evaluated. + The gradient of the function at the point x. + + + + The Hybrid (also called Hamiltonian) Monte Carlo produces samples from distribution P using a set + of Hamiltonian equations to guide the sampling process. It uses the negative of the log density as + a potential energy, and a randomly generated momentum to set up a Hamiltonian system, which is then used + to sample the distribution. This can result in a faster convergence than the random walk Metropolis sampler + (). + + The type of samples this sampler produces. + + + + The delegate type that defines a derivative evaluated at a certain point. + + Function to be differentiated. + Value where the derivative is computed. + + + + Evaluates the energy function of the target distribution. + + + + + The current location of the sampler. + + + + + The number of burn iterations between two samples. + + + + + The size of each step in the Hamiltonian equation. + + + + + The number of iterations in the Hamiltonian equation. + + + + + The algorithm used for differentiation. + + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the number of iterations in the Hamiltonian equation. + + When frog leap steps is negative or zero. + + + + Gets or sets the size of each step in the Hamiltonian equation. + + When step size is negative or zero. + + + + Constructs a new Hybrid Monte Carlo sampler. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + Random number generator used for sampling the momentum. + The method used for differentiation. + When the number of burnInterval iteration is negative. + When either x0, pdfLnP or diff is null. + + + + Returns a sample from the distribution P. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Method used to update the sample location. Used in the end of the loop. + + The old energy. + The old gradient/derivative of the energy. + The new sample. + The new gradient/derivative of the energy. + The new energy. + The difference between the old Hamiltonian and new Hamiltonian. 
Use to determine + if an update should take place. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Method for doing dot product. + + First vector/scalar in the product. + Second vector/scalar in the product. + + + + Method for adding, multiply the second vector/scalar by factor and then + add it to the first vector/scalar. + + First vector/scalar. + Scalar factor multiplying by the second vector/scalar. + Second vector/scalar. + + + + Multiplying the second vector/scalar by factor and then subtract it from + the first vector/scalar. + + First vector/scalar. + Scalar factor to be multiplied to the second vector/scalar. + Second vector/scalar. + + + + Method for sampling a random momentum. + + Momentum to be randomized. + + + + The Hamiltonian equations that is used to produce the new sample. + + + + + Method to compute the Hamiltonian used in the method. + + The momentum. + The energy. + Hamiltonian=E+p.p/2 + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than or equal to zero. + Throws when value is negative. + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than to zero. + Throws when value is negative or zero. + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than zero. + Throws when value is negative or zero. + + + + Provides utilities to analysis the convergence of a set of samples from + a . + + + + + Computes the auto correlations of a series evaluated by a function f. + + The series for computing the auto correlation. + The lag in the series + The function used to evaluate the series. + The auto correlation. + Throws if lag is zero or if lag is + greater than or equal to the length of Series. + + + + Computes the effective size of the sample when evaluated by a function f. + + The samples. + The function use for evaluating the series. + The effective size when auto correlation is taken into account. + + + + A method which samples datapoints from a proposal distribution. The implementation of this sampler + is stateless: no variables are saved between two calls to Sample. This proposal is different from + in that it doesn't take any parameters; it samples random + variables from the whole domain. + + The type of the datapoints. + A sample from the proposal distribution. + + + + A method which samples datapoints from a proposal distribution given an initial sample. The implementation + of this sampler is stateless: no variables are saved between two calls to Sample. This proposal is different from + in that it samples locally around an initial point. In other words, it + makes a small local move rather than producing a global sample from the proposal. + + The type of the datapoints. + The initial sample. + A sample from the proposal distribution. + + + + A function which evaluates a density. + + The type of data the distribution is over. + The sample we want to evaluate the density for. + + + + A function which evaluates a log density. + + The type of data the distribution is over. + The sample we want to evaluate the log density for. + + + + A function which evaluates the log of a transition kernel probability. 
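The Hamiltonian bookkeeping described above (energy = negative log density, Hamiltonian H = E + p·p/2, frog-leap steps, acceptance based on the change in H) can be sketched compactly. This illustrates the general hybrid/Hamiltonian Monte Carlo technique rather than the library's implementation; `gradLogDensity` is assumed to be something like the three-point estimator sketched earlier, and the class and method names are placeholders.

```csharp
using System;

static class LeapfrogSketch
{
    static readonly Random Rng = new Random();

    static double Dot(double[] a, double[] b)
    {
        double s = 0.0;
        for (int i = 0; i < a.Length; i++) s += a[i] * b[i];
        return s;
    }

    // One HMC proposal: 'steps' frog-leap (leapfrog) steps of size 'eps',
    // then accept/reject on the change of the Hamiltonian H = E + p.p/2.
    public static double[] Propose(double[] x, Func<double[], double> logDensity,
                                   Func<double[], double[]> gradLogDensity,
                                   int steps, double eps)
    {
        var q = (double[])x.Clone();
        var p = new double[q.Length];
        for (int i = 0; i < p.Length; i++) p[i] = Normal();   // momentum ~ N(0, 1)

        double h0 = -logDensity(q) + 0.5 * Dot(p, p);

        var grad = gradLogDensity(q);
        for (int s = 0; s < steps; s++)
        {
            for (int i = 0; i < q.Length; i++) p[i] += 0.5 * eps * grad[i];   // half kick
            for (int i = 0; i < q.Length; i++) q[i] += eps * p[i];            // drift
            grad = gradLogDensity(q);
            for (int i = 0; i < q.Length; i++) p[i] += 0.5 * eps * grad[i];   // half kick
        }

        double h1 = -logDensity(q) + 0.5 * Dot(p, p);
        return Math.Log(Rng.NextDouble()) < h0 - h1 ? q : x;  // Metropolis test on dH
    }

    static double Normal()                                    // Box-Muller standard normal
    {
        double u1 = 1.0 - Rng.NextDouble(), u2 = Rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
    }
}
```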
+ + The type for the space over which this transition kernel is defined. + The new state in the transition. + The previous state in the transition. + The log probability of the transition. + + + + The interface which every sampler must implement. + + The type of samples this sampler produces. + + + + The random number generator for this class. + + + + + Keeps track of the number of accepted samples. + + + + + Keeps track of the number of calls to the proposal sampler. + + + + + Initializes a new instance of the class. + + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Gets or sets the random number generator. + + When the random number generator is null. + + + + Returns one sample. + + + + + Returns a number of samples. + + The number of samples we want. + An array of samples. + + + + Gets the acceptance rate of the sampler. + + + + + Metropolis-Hastings sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis-Hastings sampling doesn't require that the + proposal distribution Q is symmetric in comparison to . It does need to + be able to evaluate the proposal sampler's log density though. All densities are required to be in log space. + + The Metropolis-Hastings sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the target distribution. + + + + + Evaluates the log transition probability for the proposal distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis-Hastings sampler using the default random number generator. This + constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + The log transition probability for the proposal distribution. + A method that samples from the proposal distribution. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Metropolis sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis sampling requires that the proposal + distribution Q is symmetric. All densities are required to be in log space. + + The Metropolis sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the sampling distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis sampler using the default random number generator. + + The initial sample. + The log density of the distribution we want to sample from. 
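The acceptance rule implied by the Metropolis-Hastings description is worth writing down explicitly: a candidate x' drawn from Q(·|x) is accepted with probability min(1, P(x')Q(x|x') / (P(x)Q(x'|x))), evaluated entirely in log space as the text requires. A sketch; the delegate shapes mirror the descriptions above but the names are placeholders, and counting accepted steps against total proposals yields the acceptance rate mentioned earlier.

```csharp
using System;

static class MetropolisHastingsStep
{
    static readonly Random Rng = new Random();

    // One Metropolis-Hastings step for a scalar state, all densities in log space.
    public static double Step(double x,
                              Func<double, double> logDensity,          // log P
                              Func<double, double, double> logKernel,   // log Q(to | from)
                              Func<double, double> proposalSampler)     // draw from Q(.|x)
    {
        double candidate = proposalSampler(x);
        double logRatio = logDensity(candidate) + logKernel(x, candidate)
                        - logDensity(x)        - logKernel(candidate, x);
        return Math.Log(Rng.NextDouble()) < logRatio ? candidate : x;
    }
}
```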
+ A method that samples from the symmetric proposal distribution. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Rejection sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P and Q. The density of P and Q don't need to + to be normalized, but we do need that for each x, P(x) < Q(x). + + The type of samples this sampler produces. + + + + Evaluates the density function of the sampling distribution. + + + + + Evaluates the density function of the proposal distribution. + + + + + A function which samples from a proposal distribution. + + + + + Constructs a new rejection sampler using the default random number generator. + + The density of the distribution we want to sample from. + The density of the proposal distribution. + A method that samples from the proposal distribution. + + + + Returns a sample from the distribution P. + + When the algorithms detects that the proposal + distribution doesn't upper bound the target distribution. + + + + A hybrid Monte Carlo sampler for univariate distributions. + + + + + Distribution to sample momentum from. + + + + + Standard deviations used in the sampling of the + momentum. + + + + + Gets or sets the standard deviation used in the sampling of the + momentum. + + When standard deviation is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using a random + number generator provided by the user. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + Random number generator used to sample the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + given by pSdv using a random + number generator provided by the user. This constructor will set both the burn interval and the method used for + numerical differentiation. 
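Rejection sampling as described above accepts a draw x from the proposal with probability P(x)/Q(x), which is only valid when Q(x) upper-bounds P(x) everywhere; the exception below mirrors the "doesn't upper bound the target distribution" condition. A minimal sketch with illustrative names, not the library code.

```csharp
using System;

static class RejectionSamplingSketch
{
    static readonly Random Rng = new Random();

    // Draw one sample from the (possibly unnormalized) target density p,
    // using proposal density q and a sampler for q, requiring p(x) <= q(x) for all x.
    public static double Sample(Func<double, double> targetDensity,
                                Func<double, double> proposalDensity,
                                Func<double> proposalSampler)
    {
        while (true)
        {
            double x = proposalSampler();
            double p = targetDensity(x), q = proposalDensity(x);
            if (p > q)
                throw new ArgumentException("Proposal density does not upper-bound the target at x=" + x);
            if (Rng.NextDouble() * q < p)          // accept with probability p(x)/q(x)
                return x;
        }
    }
}
```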
+ + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + The method used for numerical differentiation. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the derivative. Uses a simple three point estimation. + + Function for which the derivative is to be evaluated. + The location where the derivative is to be evaluated. + The derivative of the function at the point x. + + + + Slice sampling produces samples from distribution P by uniformly sampling from under the pdf of P using + a technique described in "Slice Sampling", R. Neal, 2003. All densities are required to be in log space. + + The slice sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + + + + Evaluates the log density function of the target distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + The scale of the slice sampler. + + + + + Constructs a new Slice sampler using the default random + number generator. The burn interval will be set to 0. + + The initial sample. + The density of the distribution we want to sample from. + The scale factor of the slice sampler. + When the scale of the slice sampler is not positive. + + + + Constructs a new slice sampler using the default random number generator. It + will set the number of burnInterval iterations and run a burnInterval phase. + + The initial sample. + The density of the distribution we want to sample from. + The number of iterations in between returning samples. + The scale factor of the slice sampler. + When the number of burnInterval iteration is negative. + When the scale of the slice sampler is not positive. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the scale of the slice sampler. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Running statistics over a window of data, allows updating by adding values. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. 
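One common way to realise the slice-sampling procedure referenced above (Neal 2003, log-space densities, a scale factor controlling the initial bracket) is stepping-out followed by shrinkage. The sketch below illustrates that reading and is not the library's exact routine; all names are placeholders.

```csharp
using System;

static class SliceSamplerSketch
{
    static readonly Random Rng = new Random();

    // One univariate slice-sampling update from x0, with initial bracket width 'scale'.
    public static double Step(double x0, Func<double, double> logDensity, double scale)
    {
        // Draw the slice level: log y = log f(x0) + log(Uniform(0,1)).
        double logY = logDensity(x0) + Math.Log(Rng.NextDouble());

        // Stepping out: bracket x0 with width 'scale', expand until both ends are below the slice.
        double left = x0 - scale * Rng.NextDouble();
        double right = left + scale;
        while (logDensity(left) > logY) left -= scale;
        while (logDensity(right) > logY) right += scale;

        // Shrinkage: sample uniformly inside the bracket, shrinking towards x0 on rejection.
        while (true)
        {
            double x1 = left + Rng.NextDouble() * (right - left);
            if (logDensity(x1) > logY) return x1;
            if (x1 < x0) left = x1; else right = x1;
        }
    }
}
```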
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + Replace ties with their mean (non-integer ranks). Default. + + + Replace ties with their minimum (typical sports ranking). + + + Replace ties with their maximum. + + + Permutation with increasing values at each index of ties. + + + + Running statistics accumulator, allows updating by adding values + or by combining two accumulators. + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Evaluates the population skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + + + + Evaluates the population kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). 
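The running-statistics accumulator described here can be updated one observation at a time without storing the data. A sketch of the classic single-pass (Welford-style) update for count, mean and variance, plus the merge used when combining two accumulators; it illustrates the idea and the N-1 versus N normalizers, not the library's internal state, and all names are placeholders.

```csharp
using System;

sealed class RunningStatsSketch
{
    public long Count { get; private set; }
    public double Mean { get; private set; }
    double _m2;                                   // sum of squared deviations from the running mean

    // Add one observed sample (in-place).
    public void Push(double x)
    {
        Count++;
        double delta = x - Mean;
        Mean += delta / Count;
        _m2 += delta * (x - Mean);
    }

    // Unbiased sample variance (N-1, Bessel's correction); NaN with fewer than two entries.
    public double Variance => Count < 2 ? double.NaN : _m2 / (Count - 1);

    // Population variance (N normalizer); NaN when empty.
    public double PopulationVariance => Count < 1 ? double.NaN : _m2 / Count;

    public double StandardDeviation => Math.Sqrt(Variance);

    // Combine two accumulators as if all samples had been pushed into one.
    public static RunningStatsSketch Combine(RunningStatsSketch a, RunningStatsSketch b)
    {
        var r = new RunningStatsSketch { Count = a.Count + b.Count };
        if (r.Count == 0) return r;
        double delta = b.Mean - a.Mean;
        r.Mean = (a.Count * a.Mean + b.Count * b.Mean) / r.Count;
        r._m2 = a._m2 + b._m2 + delta * delta * a.Count * b.Count / r.Count;
        return r;
    }
}
```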
+ Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + + Create a new running statistics over the combined samples of two existing running statistics. + + + + + Statistics operating on an array already sorted ascendingly. + + + + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. 
+ Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. 
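The R-8 rule quoted in these entries (SciPy's (1/3, 1/3) parameterisation) can be made concrete: compute the fractional one-based order statistic h = (N + 1/3)·tau + 1/3, interpolate linearly between its neighbours, and clamp to x1 / xN at the stated edge conditions. A sketch over a sorted ascending array; it mirrors the stated rule but is not the library code, and the class name is illustrative.

```csharp
using System;

static class QuantileR8Sketch
{
    // tau-th quantile (R-8) of data sorted ascendingly; tau in [0, 1].
    public static double Quantile(double[] sorted, double tau)
    {
        int n = sorted.Length;
        if (n == 0) return double.NaN;
        if (tau < (2.0 / 3.0) / (n + 1.0 / 3.0)) return sorted[0];          // use x1
        if (tau >= (n - 1.0 / 3.0) / (n + 1.0 / 3.0)) return sorted[n - 1]; // use xN

        double h = (n + 1.0 / 3.0) * tau + 1.0 / 3.0;   // 1-based fractional order statistic
        int k = (int)Math.Floor(h);
        double frac = h - k;
        return sorted[k - 1] + frac * (sorted[k] - sorted[k - 1]);
    }

    // Median, quartiles and percentiles follow directly from the same rule.
    public static double Median(double[] sorted) => Quantile(sorted, 0.5);
    public static double LowerQuartile(double[] sorted) => Quantile(sorted, 0.25);
    public static double UpperQuartile(double[] sorted) => Quantile(sorted, 0.75);
    public static double Percentile(double[] sorted, int p) => Quantile(sorted, p / 100.0);
}
```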
+ + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Extension methods to return basic statistics on set of data. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. 
+ Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the geometric mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the geometric mean of. + The geometric mean of the sample. + + + + Evaluates the geometric mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the geometric mean of. + The geometric mean of the sample. + + + + Evaluates the harmonic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the harmonic mean of. + The harmonic mean of the sample. + + + + Evaluates the harmonic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the harmonic mean of. + The harmonic mean of the sample. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population standard deviation from the provided samples. 
+ On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + The full population data. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + + The full population data. + + + + Evaluates the kurtosis from the full population. 
+ Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. + Uses a normalizer (Bessel's correction; type 2). + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness and kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + + The full population data. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. 
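The covariance entries above and below differ only in the normalizer: N-1 (Bessel's correction) for samples drawn from a larger population, N for a full population. A short illustrative sketch over paired arrays; the length guard and NaN return for mismatched inputs are choices made here for the sketch, not documented library behaviour.

```csharp
using System;
using System.Linq;

static class CovarianceSketch
{
    // Unbiased sample covariance (N-1 normalizer); NaN with fewer than two pairs.
    public static double Covariance(double[] x, double[] y)
    {
        int n = x.Length;
        if (n < 2 || n != y.Length) return double.NaN;   // sketch choice for mismatched lengths
        double mx = x.Average(), my = y.Average();
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += (x[i] - mx) * (y[i] - my);
        return sum / (n - 1);
    }

    // Population covariance (N normalizer); NaN when empty.
    public static double PopulationCovariance(double[] x, double[] y)
    {
        int n = x.Length;
        if (n < 1 || n != y.Length) return double.NaN;
        double mx = x.Average(), my = y.Average();
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += (x[i] - mx) * (y[i] - my);
        return sum / n;
    }
}
```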
+ + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + The full population data. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The data to calculate the mean of. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). 
+ Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. 
+ Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. 
+ The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. 
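The empirical CDF estimated in these entries is the fraction of samples at or below x, and the empirical inverse CDF documented just below is the matching quantile lookup. A small sketch of both; the library may use a slightly different inverse-CDF convention, so treat the rank rule here as an assumption, and the names as placeholders.

```csharp
using System;
using System.Linq;

static class EmpiricalCdfSketch
{
    // Empirical cumulative distribution function at x: share of samples <= x.
    public static double EmpiricalCdf(double[] samples, double x)
    {
        if (samples.Length == 0) return double.NaN;
        return (double)samples.Count(v => v <= x) / samples.Length;
    }

    // Empirical inverse CDF at tau: smallest sample value whose ECDF reaches tau (sketch convention).
    public static double EmpiricalInvCdf(double[] samples, double tau)
    {
        if (samples.Length == 0) return double.NaN;
        var sorted = (double[])samples.Clone();
        Array.Sort(sorted);
        int k = (int)Math.Ceiling(tau * sorted.Length);            // 1-based rank reaching tau
        return sorted[Math.Max(0, Math.Min(sorted.Length - 1, k - 1))];
    }
}
```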
+ + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + Null-entries are ignored. + + The data sample sequence. + + + + Evaluates the sample mean over a moving window, for each samples. + Returns NaN if no data is empty or if any entry is NaN. + + The sample stream to calculate the mean of. + The number of last samples to consider. + + + + Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. + Can be used in a streaming way, e.g. on large datasets not fitting into memory. + + + + + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. 
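"Entropy of a stream of double values in bits" is most naturally read as the Shannon entropy of the empirical distribution over the distinct values observed; that reading is an assumption of this sketch, not a statement about the library internals. NaN propagation follows the rule stated in the entries above.

```csharp
using System;
using System.Collections.Generic;

static class EntropySketch
{
    // Shannon entropy in bits of the empirical distribution of distinct values.
    public static double Entropy(IEnumerable<double> data)
    {
        var counts = new Dictionary<double, long>();
        long total = 0;
        foreach (var v in data)
        {
            if (double.IsNaN(v)) return double.NaN;   // any NaN poisons the result
            counts.TryGetValue(v, out long c);
            counts[v] = c + 1;
            total++;
        }
        if (total == 0) return double.NaN;            // empty stream: sketch choice

        double entropy = 0.0;
        foreach (var c in counts.Values)
        {
            double p = (double)c / total;
            entropy -= p * Math.Log(p, 2.0);          // bits: log base 2
        }
        return entropy;
    }
}
```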
+ + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. 
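Both the geometric and the harmonic mean documented here can be computed in one pass over the stream without buffering: the geometric mean as the exponential of the mean of logarithms, the harmonic mean as N over the sum of reciprocals. A sketch with illustrative names.

```csharp
using System;
using System.Collections.Generic;

static class StreamingMeansSketch
{
    // Geometric mean = exp(mean of ln(x)); single pass, no memoization.
    public static double GeometricMean(IEnumerable<double> stream)
    {
        double sumLog = 0.0; long n = 0;
        foreach (var x in stream) { sumLog += Math.Log(x); n++; }
        return n == 0 ? double.NaN : Math.Exp(sumLog / n);
    }

    // Harmonic mean = n / sum(1/x); single pass, no memoization.
    public static double HarmonicMean(IEnumerable<double> stream)
    {
        double sumInv = 0.0; long n = 0;
        foreach (var x in stream) { sumInv += 1.0 / x; n++; }
        return n == 0 ? double.NaN : n / sumInv;
    }
}
```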
+ + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. 
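The distinction made above between the N-1 (sample) and N (full population) normalizers is easy to get wrong; the same estimators are also exposed as extension methods over IEnumerable&lt;double&gt;. A small illustrative sketch, assuming the conventional extension names from MathNet.Numerics.Statistics:

```csharp
using System;
using MathNet.Numerics.Statistics;

class VarianceDemo
{
    static void Main()
    {
        double[] x = { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 };
        double[] y = { 1.0, 3.0, 5.0, 4.0, 6.0, 5.0, 8.0, 9.0 };

        // Sample estimators use the N-1 normalizer (Bessel's correction)...
        double sampleVar = x.Variance();
        double sampleStd = x.StandardDeviation();

        // ...while the population versions use N and assume x is the full population.
        double popVar = x.PopulationVariance();
        double popStd = x.PopulationStandardDeviation();

        // Unbiased sample covariance between the two streams.
        double cov = x.Covariance(y);

        Console.WriteLine($"sampleVar={sampleVar}, sampleStd={sampleStd}, popVar={popVar}, popStd={popStd}, cov={cov}");
    }
}
```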
+ + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Calculates the entropy of a stream of double values. + Returns NaN if any of the values in the stream are NaN. + + The input stream to evaluate. + + + + + Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. + + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The body to be invoked for each iteration range. + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The partition size for splitting work into smaller pieces. + The body to be invoked for each iteration range. + + + + Executes each of the provided actions inside a discrete, asynchronous task. + + An array of actions to execute. + The actions array contains a null element. + At least one invocation of the actions threw an exception. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Double-precision trigonometry toolkit. + + + + + Constant to convert a degree to grad. + + + + + Converts a degree (360-periodic) angle to a grad (400-periodic) angle. + + The degree to convert. + The converted grad angle. + + + + Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. + + The degree to convert. + The converted radian angle. + + + + Converts a grad (400-periodic) angle to a degree (360-periodic) angle. + + The grad to convert. + The converted degree. + + + + Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. + + The grad to convert. + The converted radian. + + + + Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. + + The radian to convert. + The converted degree. 
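The degree/grad/radian conversions above belong to the Trig toolkit. A short sketch, assuming the static helpers are named DegreeToRadian, RadianToDegree, GradToRadian and DegreeToGrad:

```csharp
using System;
using MathNet.Numerics;

class AngleDemo
{
    static void Main()
    {
        // Degree (360-periodic) to radian (2*Pi-periodic) and back.
        double rad = Trig.DegreeToRadian(90.0);    // roughly Pi/2
        double deg = Trig.RadianToDegree(Math.PI); // 180

        // Grad (400-periodic) conversions.
        double radFromGrad = Trig.GradToRadian(100.0); // roughly Pi/2
        double gradFromDeg = Trig.DegreeToGrad(45.0);  // 50

        Console.WriteLine($"{rad} {deg} {radFromGrad} {gradFromDeg}");
    }
}
```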
+ + + + Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. + + The radian to convert. + The converted grad. + + + + Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). + + + + + Trigonometric Sine of an angle in radian, or opposite / hypotenuse. + + The angle in radian. + The sine of the radian angle. + + + + Trigonometric Sine of a Complex number. + + The complex value. + The sine of the complex number. + + + + Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. + + The angle in radian. + The cosine of an angle in radian. + + + + Trigonometric Cosine of a Complex number. + + The complex value. + The cosine of a complex number. + + + + Trigonometric Tangent of an angle in radian, or opposite / adjacent. + + The angle in radian. + The tangent of the radian angle. + + + + Trigonometric Tangent of a Complex number. + + The complex value. + The tangent of the complex number. + + + + Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. + + The angle in radian. + The cotangent of an angle in radian. + + + + Trigonometric Cotangent of a Complex number. + + The complex value. + The cotangent of the complex number. + + + + Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. + + The angle in radian. + The secant of the radian angle. + + + + Trigonometric Secant of a Complex number. + + The complex value. + The secant of the complex number. + + + + Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. + + The angle in radian. + Cosecant of an angle in radian. + + + + Trigonometric Cosecant of a Complex number. + + The complex value. + The cosecant of a complex number. + + + + Trigonometric principal Arc Sine in radian + + The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Sine of this Complex number. + + The complex value. + The arc sine of a complex number. + + + + Trigonometric principal Arc Cosine in radian + + The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Cosine of this Complex number. + + The complex value. + The arc cosine of a complex number. + + + + Trigonometric principal Arc Tangent in radian + + The opposite for a unit adjacent (i.e. opposite / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Tangent of this Complex number. + + The complex value. + The arc tangent of a complex number. + + + + Trigonometric principal Arc Cotangent in radian + + The adjacent for a unit opposite (i.e. adjacent / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cotangent of this Complex number. + + The complex value. + The arc cotangent of a complex number. + + + + Trigonometric principal Arc Secant in radian + + The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Secant of this Complex number. + + The complex value. + The arc secant of a complex number. + + + + Trigonometric principal Arc Cosecant in radian + + The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cosecant of this Complex number. + + The complex value. + The arc cosecant of a complex number. + + + + Hyperbolic Sine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic sine of the angle. 
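The normalized sinc and the reciprocal trigonometric functions above are plain static helpers; sinc in particular is the textbook band-limited pulse shape. An illustrative C# sketch, assuming the method names Sinc, Sec, Csc, Cot and Acot on MathNet.Numerics.Trig:

```csharp
using System;
using MathNet.Numerics;

class TrigDemo
{
    static void Main()
    {
        // Normalized sinc: sin(pi*x)/(pi*x).
        double s0 = Trig.Sinc(0.0); // 1.0
        double s1 = Trig.Sinc(0.5); // about 0.6366 (= 2/pi)

        // Reciprocal functions: secant, cosecant, cotangent.
        double sec = Trig.Sec(0.3);
        double csc = Trig.Csc(0.3);
        double cot = Trig.Cot(0.3);

        // Principal inverse function, e.g. arc cotangent recovers the angle.
        double acot = Trig.Acot(cot); // about 0.3

        Console.WriteLine($"{s0} {s1} {sec} {csc} {cot} {acot}");
    }
}
```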
+ + + + Hyperbolic Sine of a Complex number. + + The complex value. + The hyperbolic sine of a complex number. + + + + Hyperbolic Cosine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic Cosine of the angle. + + + + Hyperbolic Cosine of a Complex number. + + The complex value. + The hyperbolic cosine of a complex number. + + + + Hyperbolic Tangent in radian + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic tangent of the angle. + + + + Hyperbolic Tangent of a Complex number. + + The complex value. + The hyperbolic tangent of a complex number. + + + + Hyperbolic Cotangent + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cotangent of the angle. + + + + Hyperbolic Cotangent of a Complex number. + + The complex value. + The hyperbolic cotangent of a complex number. + + + + Hyperbolic Secant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic secant of the angle. + + + + Hyperbolic Secant of a Complex number. + + The complex value. + The hyperbolic secant of a complex number. + + + + Hyperbolic Cosecant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cosecant of the angle. + + + + Hyperbolic Cosecant of a Complex number. + + The complex value. + The hyperbolic cosecant of a complex number. + + + + Hyperbolic Area Sine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Sine of this Complex number. + + The complex value. + The hyperbolic arc sine of a complex number. + + + + Hyperbolic Area Cosine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosine of this Complex number. + + The complex value. + The hyperbolic arc cosine of a complex number. + + + + Hyperbolic Area Tangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Tangent of this Complex number. + + The complex value. + The hyperbolic arc tangent of a complex number. + + + + Hyperbolic Area Cotangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cotangent of this Complex number. + + The complex value. + The hyperbolic arc cotangent of a complex number. + + + + Hyperbolic Area Secant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Secant of this Complex number. + + The complex value. + The hyperbolic arc secant of a complex number. + + + + Hyperbolic Area Cosecant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosecant of this Complex number. + + The complex value. + The hyperbolic arc cosecant of a complex number. + + + + Hamming window. Named after Richard Hamming. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hamming window. Named after Richard Hamming. + Periodic version, useful e.g. for FFT purposes. + + + + + Hann window. Named after Julius von Hann. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hann window. Named after Julius von Hann. + Periodic version, useful e.g. for FFT purposes. + + + + + Cosine window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Cosine window. + Periodic version, useful e.g. for FFT purposes. + + + + + Lanczos window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Lanczos window. 
+ Periodic version, useful e.g. for FFT purposes. + + + + + Gauss window. + + + + + Blackman window. + + + + + Blackman-Harris window. + + + + + Blackman-Nuttall window. + + + + + Bartlett window. + + + + + Bartlett-Hann window. + + + + + Nuttall window. + + + + + Flat top window. + + + + + Uniform rectangular (Dirichlet) window. + + + + + Triangular window. + + + + + Tukey tapering window. A rectangular window bounded + by half a cosine window on each side. + + Width of the window + Fraction of the window occupied by the cosine parts + +
+
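The window functions listed at the end of this file are typically applied to a sample block before an FFT (e.g. for a spectrum display) or used for FIR filter design. A hedged sketch, assuming the static Window class exposes Hamming, HannPeriodic and Tukey(width, fraction) as described above; the test tone is invented:

```csharp
using System;
using MathNet.Numerics;

class WindowDemo
{
    static void Main()
    {
        const int n = 1024;

        // Symmetric Hamming window, e.g. for filter design.
        double[] hamming = Window.Hamming(n);

        // Periodic Hann window, the usual choice before an FFT.
        double[] hann = Window.HannPeriodic(n);

        // Tukey window: flat in the middle, cosine-tapered over 20% of the width.
        double[] tukey = Window.Tukey(n, 0.2);

        // Invented test block: a 1.5 kHz tone at 48 kHz sample rate.
        double[] block = new double[n];
        for (int i = 0; i < n; i++) block[i] = Math.Sin(2 * Math.PI * 1500.0 * i / 48000.0);

        // Taper the block with the periodic Hann window before transforming it.
        for (int i = 0; i < n; i++) block[i] *= hann[i];

        Console.WriteLine($"{hamming[0]} {tukey[n / 2]} {block[n / 2]}");
    }
}
```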
diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.dll new file mode 100755 index 0000000..706a8ae Binary files /dev/null and b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.dll differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.xml new file mode 100755 index 0000000..5f9e8af --- /dev/null +++ b/oscardata/packages/MathNet.Numerics.4.12.0/lib/net461/MathNet.Numerics.xml @@ -0,0 +1,57152 @@ + + + + MathNet.Numerics + + + + + Useful extension methods for Arrays. + + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Enumerative Combinatorics and Counting. + + + + + Count the number of possible variations without repetition. + The order matters and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of distinct variations. + + + + Count the number of possible variations with repetition. + The order matters and each object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of distinct variations with repetition. + + + + Count the number of possible combinations without repetition. + The order does not matter and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of combinations. + + + + Count the number of possible combinations with repetition. + The order does not matter and an object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of combinations with repetition. + + + + Count the number of possible permutations (without repetition). + + Number of (distinguishable) elements in the set. + Maximum number of permutations without repetition. + + + + Generate a random permutation, without repetition, by generating the index numbers 0 to N-1 and shuffle them randomly. + Implemented using Fisher-Yates Shuffling. + + An array of length N that contains (in any order) the integers of the interval [0, N). + Number of (distinguishable) elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation, without repetition, from a data array by reordering the provided array in-place. + Implemented using Fisher-Yates Shuffling. The provided data array will be modified. + + The data array to be reordered. The array will be modified by this routine. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation from a data sequence by returning the provided data in random order. + Implemented using Fisher-Yates Shuffling. + + The data elements to be reordered. 
+ The random number generator to use. Optional; the default random source will be used if null. + + + + Generate a random combination, without repetition, by randomly selecting some of N elements. + + Number of elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Generate a random combination, without repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Select a random combination, without repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen combination, in the original order. + + + + Generates a random combination, with repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + Integer mask array of length N, for each item the number of times it was selected. + + + + Select a random combination, with repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen combination with repetition, in the original order. + + + + Generate a random variation, without repetition, by randomly selecting k of n elements with order. + Implemented using partial Fisher-Yates Shuffling. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. + Implemented using partial Fisher-Yates Shuffling. + + The data source to choose from. + Number of elements (k) to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation, in random order. + + + + Generate a random variation, with repetition, by randomly selecting k of n elements with order. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. + + The data source to choose from. + Number of elements (k) to choose from the data set. 
Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation with repetition, in random order. + + + + 32-bit single precision complex numbers class. + + + + The class Complex32 provides all elementary operations + on complex numbers. All the operators +, -, + *, /, ==, != are defined in the + canonical way. Additional complex trigonometric functions + are also provided. Note that the Complex32 structures + has two special constant values and + . + + + + Complex32 x = new Complex32(1f,2f); + Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); + Complex32 z = (x + y) / (x - y); + + + + For mathematical details about complex numbers, please + have a look at the + Wikipedia + + + + + + The real component of the complex number. + + + + + The imaginary component of the complex number. + + + + + Initializes a new instance of the Complex32 structure with the given real + and imaginary parts. + + The value for the real component. + The value for the imaginary component. + + + + Creates a complex number from a point's polar coordinates. + + A complex number. + The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. + The phase, which is the angle from the line to the horizontal axis, measured in radians. + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to one and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to one. + + + + + Returns a new instance + with real and imaginary numbers positive infinite. + + + + + Returns a new instance + with real and imaginary numbers not a number. + + + + + Gets the real component of the complex number. + + The real component of the complex number. + + + + Gets the real imaginary component of the complex number. + + The real imaginary component of the complex number. + + + + Gets the phase or argument of this Complex32. + + + Phase always returns a value bigger than negative Pi and + smaller or equal to Pi. If this Complex32 is zero, the Complex32 + is assumed to be positive real with an argument of zero. + + The phase or argument of this Complex32 + + + + Gets the magnitude (or absolute value) of a complex number. + + Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN + The magnitude of the current instance. + + + + Gets the squared magnitude (or squared absolute value) of a complex number. + + The squared magnitude of the current instance. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex32. + + + + Gets a value indicating whether the Complex32 is zero. + + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. + + + true if this instance is ; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. + + + true if this instance is infinite; otherwise, false. 
+ + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + + true if this instance is real nonnegative number; otherwise, false. + + + + + Exponential of this Complex32 (exp(x), E^x). + + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex32 (Base E). + + The natural logarithm of this complex number. + + + + Common Logarithm of this Complex32 (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex32 with custom base. + + The logarithm of this complex number. + + + + Raise this Complex32 to the given value. + + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex32 to the inverse of the given value. + + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex32 + + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex32 + + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex32. + + + + + Evaluate all cubic roots of this Complex32. + + + + + Equality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real and imaginary components of the two complex numbers are equal; false otherwise. + + + + Inequality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real or imaginary components of the two complex numbers are not equal; false otherwise. + + + + Unary addition. + + The complex number to operate on. + Returns the same complex number. + + + + Unary minus. + + The complex number to operate on. + The negated value of the . + + + Addition operator. Adds two complex numbers together. + The result of the addition. + One of the complex numbers to add. + The other complex numbers to add. + + + Subtraction operator. Subtracts two complex numbers. + The result of the subtraction. + The complex number to subtract from. + The complex number to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The complex numbers to add. + The float value to add. + + + Subtraction operator. Subtracts float value from a complex value. + The result of the subtraction. + The complex number to subtract from. + The float value to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The float value to add. + The complex numbers to add. + + + Subtraction operator. Subtracts complex value from a float value. + The result of the subtraction. + The float vale to subtract from. + The complex value to subtract. + + + Multiplication operator. Multiplies two complex numbers. + The result of the multiplication. + One of the complex numbers to multiply. + The other complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The float value to multiply. + The complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The complex number to multiply. + The float value to multiply. + + + Division operator. 
Divides a complex number by another. + Enhanced Smith's algorithm for dividing two complex numbers + + The result of the division. + The dividend. + The divisor. + + + + Helper method for dividing. + + Re first + Im first + Re second + Im second + + + + + Division operator. Divides a float value by a complex number. + Algorithm based on Smith's algorithm + + The result of the division. + The dividend. + The divisor. + + + Division operator. Divides a complex number by a float value. + The result of the division. + The dividend. + The divisor. + + + + Computes the conjugate of a complex number and returns the result. + + + + + Returns the multiplicative inverse of a complex number. + + + + + Converts the value of the current complex number to its equivalent string representation in Cartesian form. + + The string representation of the current instance in Cartesian form. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format for its real and imaginary parts. + + The string representation of the current instance in Cartesian form. + A standard or custom numeric format string. + + is not a valid format string. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified culture-specific formatting information. + + The string representation of the current instance in Cartesian form, as specified by . + An object that supplies culture-specific formatting information. + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. + The string representation of the current instance in Cartesian form, as specified by and . + A standard or custom numeric format string. + An object that supplies culture-specific formatting information. + + is not a valid format string. + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + The hash code for the complex number. + + + The hash code of the complex number. + + + The hash code is calculated as + System.Math.Exp(ComplexMath.Absolute(complexNumber)). + + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as float. 
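The Complex32 struct documented here provides the usual operators plus polar construction. The snippet below is an illustrative C# sketch following the example embedded in the class summary above (variable names and values are arbitrary):

```csharp
using System;
using MathNet.Numerics;

class Complex32Demo
{
    static void Main()
    {
        // Cartesian and polar construction of single-precision complex numbers.
        var a = new Complex32(1f, 2f);
        var b = Complex32.FromPolarCoordinates(1f, (float)Math.PI / 4f);

        // The canonical operators +, -, *, / are all defined.
        Complex32 sum = a + b;
        Complex32 ratio = (a + b) / (a - b);

        // Magnitude (absolute value) and phase (argument) of the result.
        var mag = ratio.Magnitude;
        var phase = ratio.Phase;

        Console.WriteLine($"{sum} |z|={mag} arg(z)={phase}");
    }
}
```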
+ + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Explicit conversion of a real decimal to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Explicit conversion of a Complex to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Implicit conversion of a real byte to a Complex32. + + The byte value to convert. + The result of the conversion. + + + + Implicit conversion of a real short to a Complex32. + + The short value to convert. + The result of the conversion. + + + + Implicit conversion of a signed byte to a Complex32. + + The signed byte value to convert. + The result of the conversion. + + + + Implicit conversion of a unsigned real short to a Complex32. + + The unsigned short value to convert. + The result of the conversion. + + + + Implicit conversion of a real int to a Complex32. + + The int value to convert. + The result of the conversion. + + + + Implicit conversion of a BigInteger int to a Complex32. + + The BigInteger value to convert. + The result of the conversion. + + + + Implicit conversion of a real long to a Complex32. + + The long value to convert. + The result of the conversion. + + + + Implicit conversion of a real uint to a Complex32. + + The uint value to convert. + The result of the conversion. + + + + Implicit conversion of a real ulong to a Complex32. + + The ulong value to convert. + The result of the conversion. + + + + Implicit conversion of a real float to a Complex32. + + The float value to convert. + The result of the conversion. + + + + Implicit conversion of a real double to a Complex32. + + The double value to convert. + The result of the conversion. + + + + Converts this Complex32 to a . + + A with the same values as this Complex32. + + + + Returns the additive inverse of a specified complex number. + + The result of the real and imaginary components of the value parameter multiplied by -1. + A complex number. + + + + Computes the conjugate of a complex number and returns the result. + + The conjugate of . + A complex number. + + + + Adds two complex numbers and returns the result. + + The sum of and . + The first complex number to add. + The second complex number to add. + + + + Subtracts one complex number from another and returns the result. + + The result of subtracting from . + The value to subtract from (the minuend). + The value to subtract (the subtrahend). + + + + Returns the product of two complex numbers. + + The product of the and parameters. + The first complex number to multiply. + The second complex number to multiply. 
+ + + + Divides one complex number by another and returns the result. + + The quotient of the division. + The complex number to be divided. + The complex number to divide by. + + + + Returns the multiplicative inverse of a complex number. + + The reciprocal of . + A complex number. + + + + Returns the square root of a specified complex number. + + The square root of . + A complex number. + + + + Gets the absolute value (or magnitude) of a complex number. + + The absolute value of . + A complex number. + + + + Returns e raised to the power specified by a complex number. + + The number e raised to the power . + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a complex number. + + The complex number raised to the power . + A complex number to be raised to a power. + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a single-precision floating-point number. + + The complex number raised to the power . + A complex number to be raised to a power. + A single-precision floating-point number that specifies a power. + + + + Returns the natural (base e) logarithm of a specified complex number. + + The natural (base e) logarithm of . + A complex number. + + + + Returns the logarithm of a specified complex number in a specified base. + + The logarithm of in base . + A complex number. + The base of the logarithm. + + + + Returns the base-10 logarithm of a specified complex number. + + The base-10 logarithm of . + A complex number. + + + + Returns the sine of the specified complex number. + + The sine of . + A complex number. + + + + Returns the cosine of the specified complex number. + + The cosine of . + A complex number. + + + + Returns the tangent of the specified complex number. + + The tangent of . + A complex number. + + + + Returns the angle that is the arc sine of the specified complex number. + + The angle which is the arc sine of . + A complex number. + + + + Returns the angle that is the arc cosine of the specified complex number. + + The angle, measured in radians, which is the arc cosine of . + A complex number that represents a cosine. + + + + Returns the angle that is the arc tangent of the specified complex number. + + The angle that is the arc tangent of . + A complex number. + + + + Returns the hyperbolic sine of the specified complex number. + + The hyperbolic sine of . + A complex number. + + + + Returns the hyperbolic cosine of the specified complex number. + + The hyperbolic cosine of . + A complex number. + + + + Returns the hyperbolic tangent of the specified complex number. + + The hyperbolic tangent of . + A complex number. + + + + Extension methods for the Complex type provided by System.Numerics + + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex. + + + + Gets the conjugate of the Complex number. + + The number to perform this operation on. + + The semantic of setting the conjugate is such that + + // a, b of type Complex32 + a.Conjugate = b; + + is equivalent to + + // a, b of type Complex32 + a = b.Conjugate + + + The conjugate of the number. 
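The static helpers described above (negate, conjugate, square root, exp, log, trigonometric functions) mirror the System.Numerics.Complex API in single precision. A sketch under that assumption, using the conventional names Sqrt, Exp, Log, Log10 and Sin:

```csharp
using System;
using MathNet.Numerics;

class Complex32MathDemo
{
    static void Main()
    {
        // Principal square root of -1 is i.
        Complex32 root = Complex32.Sqrt(new Complex32(-1f, 0f));

        // exp, natural log and base-10 log of complex values.
        Complex32 e   = Complex32.Exp(new Complex32(0f, (float)Math.PI)); // about -1
        Complex32 ln  = Complex32.Log(new Complex32(1f, 1f));
        Complex32 l10 = Complex32.Log10(new Complex32(100f, 0f));

        // Trigonometric functions also accept complex arguments.
        Complex32 s = Complex32.Sin(new Complex32(0f, 1f)); // equals i*sinh(1)

        Console.WriteLine($"{root} {e} {ln} {l10} {s}");
    }
}
```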
+ + + + Returns the multiplicative inverse of a complex number. + + + + + Exponential of this Complex (exp(x), E^x). + + The number to perform this operation on. + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex (Base E). + + The number to perform this operation on. + + The natural logarithm of this complex number. + + + + + Common Logarithm of this Complex (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex with custom base. + + The logarithm of this complex number. + + + + Raise this Complex to the given value. + + The number to perform this operation on. + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex to the inverse of the given value. + + The number to perform this operation on. + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex + + The number to perform this operation on. + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex + + The number to perform this operation on. + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex. + + + + + Evaluate all cubic roots of this Complex. + + + + + Gets a value indicating whether the Complex32 is zero. + + The number to perform this operation on. + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + The number to perform this operation on. + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + The number to perform this operation on. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. + + The number to perform this operation on. + + true if this instance is NaN; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. + + The number to perform this operation on. + + true if this instance is infinite; otherwise, false. + + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + The number to perform this operation on. + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + The number to perform this operation on. + + true if this instance is real nonnegative number; otherwise, false. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + The string to parse. 
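The same operations are also available as extension methods on the double-precision System.Numerics.Complex type. A hedged sketch, assuming the extension names MagnitudeSquared, Conjugate, SquareRoot and Exponential from MathNet's ComplexExtensions:

```csharp
using System;
using System.Numerics;
using MathNet.Numerics; // brings the Complex extension methods into scope

class ComplexExtDemo
{
    static void Main()
    {
        var z = new Complex(3.0, 4.0);

        // |z|^2 without taking the square root, and the conjugate 3 - 4i.
        double magSq = z.MagnitudeSquared(); // 25
        Complex conj = z.Conjugate();

        // Square root and exponential of a double-precision complex value.
        Complex root = z.SquareRoot();
        Complex expo = z.Exponential();

        Console.WriteLine($"{magSq} {conj} {root} {expo}");
    }
}
```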
+ + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as double. + + + + + Converts the string representation of a complex number to a double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + A collection of frequently used mathematical constants. 
+ + + + The number e + + + The number log[2](e) + + + The number log[10](e) + + + The number log[e](2) + + + The number log[e](10) + + + The number log[e](pi) + + + The number log[e](2*pi)/2 + + + The number 1/e + + + The number sqrt(e) + + + The number sqrt(2) + + + The number sqrt(3) + + + The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 + + + The number sqrt(3)/2 + + + The number pi + + + The number pi*2 + + + The number pi/2 + + + The number pi*3/2 + + + The number pi/4 + + + The number sqrt(pi) + + + The number sqrt(2pi) + + + The number sqrt(pi/2) + + + The number sqrt(2*pi*e) + + + The number log(sqrt(2*pi)) + + + The number log(sqrt(2*pi*e)) + + + The number log(2 * sqrt(e / pi)) + + + The number 1/pi + + + The number 2/pi + + + The number 1/sqrt(pi) + + + The number 1/sqrt(2pi) + + + The number 2/sqrt(pi) + + + The number 2 * sqrt(e / pi) + + + The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). + + + + + The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). + + + + + The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. amplitude, current, voltage). + + + The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. + + + The Catalan constant + Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } + + + The Euler-Mascheroni constant + lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } + + + The number (1+sqrt(5))/2, also known as the golden ratio + + + The Glaisher constant + e^(1/12 - Zeta(-1)) + + + The Khinchin constant + prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} + + + + The size of a double in bytes. + + + + + The size of an int in bytes. + + + + + The size of a float in bytes. + + + + + The size of a Complex in bytes. + + + + + The size of a Complex in bytes. 
+ + + + Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) + + + Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) + + + Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) + + + Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) + + + Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) + + + Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) + + + Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) + + + Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) + + + Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) + + + Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) + + + Planck time: t_p = l_p/c_0 [s] (2007 CODATA) + + + Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) + + + Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) + + + Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) + + + Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) + + + Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) + + + Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) + + + Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) + + + Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) + + + Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) + + + Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) + + + Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) + + + Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) + + + Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) + + + Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) + + + Electron Mass: [kg] (2007 CODATA) + + + Electron Mass Energy Equivalent: [J] (2007 CODATA) + + + Electron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Electron Compton Wavelength: [m] (2007 CODATA) + + + Classical Electron Radius: [m] (2007 CODATA) + + + Thomson Cross Section: [m^2] (2002 CODATA) + + + Electron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Electon G-Factor: [1] (2007 CODATA) + + + Muon Mass: [kg] (2007 CODATA) + + + Muon Mass Energy Equivalent: [J] (2007 CODATA) + + + Muon Molar Mass: [kg mol^-1] (2007 CODATA) + + + Muon Compton Wavelength: [m] (2007 CODATA) + + + Muon Magnetic Moment: [J T^-1] (2007 CODATA) + + + Muon G-Factor: [1] (2007 CODATA) + + + Tau Mass: [kg] (2007 CODATA) + + + Tau Mass Energy Equivalent: [J] (2007 CODATA) + + + Tau Molar Mass: [kg mol^-1] (2007 CODATA) + + + Tau Compton Wavelength: [m] (2007 CODATA) + + + Proton Mass: [kg] (2007 CODATA) + + + Proton Mass Energy Equivalent: [J] (2007 CODATA) + + + Proton Molar Mass: [kg mol^-1] (2007 CODATA) + + + Proton Compton Wavelength: [m] (2007 CODATA) + + + Proton Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton G-Factor: [1] (2007 CODATA) + + + Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Neutron Mass: [kg] (2007 CODATA) + + + Neutron Mass Energy Equivalent: [J] (2007 CODATA) + + + Neutron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Neuron Compton Wavelength: [m] (2007 CODATA) + + + Neutron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Neutron G-Factor: [1] 
(2007 CODATA) + + + Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Deuteron Mass: [kg] (2007 CODATA) + + + Deuteron Mass Energy Equivalent: [J] (2007 CODATA) + + + Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Helion Mass: [kg] (2007 CODATA) + + + Helion Mass Energy Equivalent: [J] (2007 CODATA) + + + Helion Molar Mass: [kg mol^-1] (2007 CODATA) + + + Avogadro constant: [mol^-1] (2010 CODATA) + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 + + + The SI prefix factor corresponding to 1 000 + + + The SI prefix factor corresponding to 100 + + + The SI prefix factor corresponding to 10 + + + The SI prefix factor corresponding to 0.1 + + + The SI prefix factor corresponding to 0.01 + + + The SI prefix factor corresponding to 0.001 + + + The SI prefix factor corresponding to 0.000 001 + + + The SI prefix factor corresponding to 0.000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 + + + + Sets parameters for the library. + + + + + Use a specific provider if configured, e.g. using + environment variables, or fall back to the best providers. + + + + + Use the best provider available. + + + + + Use the Intel MKL native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Use the Intel MKL native provider for linear algebra, with the specified configuration parameters. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the Intel MKL native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Use the Nvidia CUDA native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the Nvidia CUDA native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Use the OpenBLAS native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the OpenBLAS native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Try to use any available native provider in an undefined order. + + + True if one of the native providers was found and successfully initialized. + False if it failed and the previous provider is still active. 
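The Control class described above selects the linear algebra provider and the degree of parallelism; on a small SBC such as an Odroid or Raspberry Pi it can be useful to cap the worker threads. A sketch, assuming the members TryUseNativeMKL and MaxDegreeOfParallelism:

```csharp
using System;
using MathNet.Numerics;

class ControlDemo
{
    static void Main()
    {
        // Try the Intel MKL native provider; if the binaries are missing the call
        // returns false and the previously active provider stays in place.
        bool mkl = Control.TryUseNativeMKL();

        // Limit the number of parallel worker threads, e.g. on a small SBC.
        Control.MaxDegreeOfParallelism = 2;

        Console.WriteLine($"MKL active: {mkl}, workers: {Control.MaxDegreeOfParallelism}");
    }
}
```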
+ + + + + Gets or sets a value indicating whether the distribution classes check validate each parameter. + For the multivariate distributions this could involve an expensive matrix factorization. + The default setting of this property is true. + + + + + Gets or sets a value indicating whether to use thread safe random number generators (RNG). + Thread safe RNG about two and half time slower than non-thread safe RNG. + + + true to use thread safe random number generators ; otherwise, false. + + + + + Optional path to try to load native provider binaries from. + + + + + Gets or sets a value indicating how many parallel worker threads shall be used + when parallelization is applicable. + + Default to the number of processor cores, must be between 1 and 1024 (inclusive). + + + + Gets or sets the TaskScheduler used to schedule the worker tasks. + + + + + Gets or sets the order of the matrix when linear algebra provider + must calculate multiply in parallel threads. + + The order. Default 64, must be at least 3. + + + + Gets or sets the number of elements a vector or matrix + must contain before we multiply threads. + + Number of elements. Default 300, must be at least 3. + + + + Numerical Derivative. + + + + + Initialized a NumericalDerivative with the given points and center. + + + + + Initialized a NumericalDerivative with the default points and center for the given order. + + + + + Evaluates the derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + Derivative order. + + + + Creates a function handle for the derivative of a scalar univariate function. + + Univariate function handle. + Derivative order. + + + + Evaluates the first derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the first derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the second derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the second derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + Derivative order. + + + + Evaluates the first partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + + + + Evaluates the partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + Derivative order. 
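The numerical differentiation described above comes in two flavours: a simple facade for one-off evaluations and the NumericalDerivative class with an explicit points/center finite-difference scheme. An illustrative sketch, assuming the facade is the static Differentiate class and that EvaluateDerivative accepts the current function value as its last argument:

```csharp
using System;
using MathNet.Numerics;
using MathNet.Numerics.Differentiation;

class DerivativeDemo
{
    static void Main()
    {
        Func<double, double> f = Math.Sin;

        // Facade helpers: first and second derivative of sin(x) at x = 0.
        double d1 = Differentiate.FirstDerivative(f, 0.0);  // about cos(0) = 1
        double d2 = Differentiate.SecondDerivative(f, 0.0); // about -sin(0) = 0

        // Explicit finite-difference scheme: 5 points with the center at index 2,
        // passing the already-known function value at the evaluation point.
        var nd = new NumericalDerivative(5, 2);
        double d3 = nd.EvaluateDerivative(f, 0.0, 3, f(0.0)); // third derivative, about -1

        Console.WriteLine($"{d1} {d2} {d3}");
    }
}
```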
+ + + + Evaluates the first partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + + + + Class to calculate finite difference coefficients using the Taylor series expansion method. + + + For n points, coefficients are calculated up to the maximum derivative order possible (n-1). + The current function value position specifies the "center" for surrounding coefficients. + Selecting the first, middle or last position yields the forward, backward or central difference method, respectively. + + + + + + + Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. + + + + + Initializes a new instance of the class. + + Number of finite difference coefficients. + + + + Gets the finite difference coefficients for a specified center and order. + + Current function position with respect to coefficients. Must be within point range. + Order of finite difference coefficients. + Vector of finite difference coefficients. + + + + Gets the finite difference coefficients for all orders at a specified center. + + Current function position with respect to coefficients. Must be within point range. + Rectangular array of coefficients, with columns specifying order. + + + + Type of finite difference step size. + + + + + The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. + + + + + A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however + this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the + function input parameter and not the order of the finite difference derivative. + + + + + A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order + and function input parameter. The initial scaling according to finite difference coefficient order can be thought of as producing a + base step size, h, that is equivalent to the relative scaling above. This step size is then scaled according to the function + input parameter. Although implementations may vary, an example of second-order accurate scaling may be (eps)^(1/3)*(1+abs(x)). + + + + + Class to evaluate the numerical derivative of a function using finite difference approximations. + Variable point counts and center positions can be set at construction. + This class can also be used to return function handles (delegates) for a fixed derivative order and variable. + It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions, respectively. + + + + + Initializes a NumericalDerivative class with the default 3 point center difference method. + + + + + Initializes a NumericalDerivative class. + + Number of points for finite difference derivatives. + Location of the center with respect to other points. Value ranges from zero to points-1. + + + + Sets and gets the finite difference step size. This is the value used for each function evaluation if relative step size types are used. + If the base step size used in scaling is desired, see the base step size properties below. + + + Setting then getting the StepSize may return a different value.
This is not unusual since a user-defined step size is converted to a + base-2 representable number to improve finite difference accuracy. + + + + + Sets and gets the base finite difference step size. This assigned value to this parameter is only used if is set to RelativeX. + However, if the StepType is Relative, it will contain the base step size computed from based on the finite difference order. + + + + + Sets and gets the base finite difference step size. This parameter is only used if is set to Relative. + By default this is set to machine epsilon, from which is computed. + + + + + Sets and gets the location of the center point for the finite difference derivative. + + + + + Number of times a function is evaluated for numerical derivatives. + + + + + Type of step size for computing finite differences. If set to absolute, dx = h. + If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when + h is approximately equal to the square-root of machine accuracy, epsilon. + + + + + Evaluates the derivative of equidistant points using the finite difference method. + + Vector of points StepSize apart. + Derivative order. + Finite difference step size. + Derivative of points of the specified order. + + + + Evaluates the derivative of a scalar univariate function. + + + Supplying the optional argument currentValue will reduce the number of function evaluations + required to calculate the finite difference derivative. + + Function handle. + Point at which to compute the derivative. + Derivative order. + Current function value at center. + Function derivative at x of the specified order. + + + + Creates a function handle for the derivative of a scalar univariate function. + + Input function handle. + Derivative order. + Function handle that evaluates the derivative of input function at a fixed order. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Function partial derivative at x of the specified order. + + + + Evaluates the partial derivatives of a multivariate function array. + + + This function assumes the input vector x is of the correct length for f. + + Multivariate vector function array handle. + Vector at which to evaluate the derivatives. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Vector of functions partial derivatives at x of the specified order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at a fixed order. + + + + Creates a function handle for the partial derivative of a vector multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at fixed order. + + + + Evaluates the mixed partial derivative of variable order for multivariate functions. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function handle. + Points at which to evaluate the derivative. 
+ Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivative at x of the specified order. + + + + Evaluates the mixed partial derivative of variable order for multivariate function arrays. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function array handle. + Vector at which to evaluate the derivative. + Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivatives at x of the specified order. + + + + Creates a function handle for the mixed partial derivative of a multivariate function. + + Input function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Creates a function handle for the mixed partial derivative of a multivariate vector function. + + Input vector function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Resets the evaluation counter. + + + + + Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Hessian object with a three point central difference method. + + + + + Creates a numerical Hessian with a specified differentiation scheme. + + Number of points for Hessian evaluation. + Center point for differentiation. + + + + Evaluates the Hessian of the scalar univariate function f at points x. + + Scalar univariate function handle. + Point at which to evaluate Hessian. + Hessian tensor. + + + + Evaluates the Hessian of a multivariate function f at points x. + + + This method of computing the Hessian is only valid for Lipschitz continuous functions. + The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. + + Multivariate function handle.> + Points at which to evaluate Hessian.> + Hessian tensor. + + + + Resets the function evaluation counter for the Hessian. + + + + + Class for evaluating the Jacobian of a function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Jacobian object with a three point central difference method. + + + + + Creates a numerical Jacobian with a specified differentiation scheme. + + Number of points for Jacobian evaluation. + Center point for differentiation. + + + + Evaluates the Jacobian of scalar univariate function f at point x. + + Scalar univariate function handle. + Point at which to evaluate Jacobian. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function f at vector x. + + + This function assumes that the length of vector x consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Jacobian vector. 
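The Hessian and Jacobian classes summarized above reuse the same finite-difference machinery. A compact sketch of evaluating a gradient and a Hessian for a scalar multivariate function; the class names (NumericalJacobian, NumericalHessian) and the default three-point central scheme come from the summaries above, while the exact Evaluate signatures are assumptions:

```csharp
using System;
using MathNet.Numerics.Differentiation;

class JacobianHessianDemo
{
    static void Main()
    {
        // f(x, y) = x^2 * y + sin(y)
        Func<double[], double> f = v => v[0] * v[0] * v[1] + Math.Sin(v[1]);
        var x = new[] { 1.5, 0.5 };

        // Gradient (Jacobian of a scalar multivariate function) via central differences.
        var jacobian = new NumericalJacobian();
        double[] grad = jacobian.Evaluate(f, x);

        // Hessian (matrix of second partial derivatives), mirrored across the diagonal
        // as described in the remark above.
        var hessian = new NumericalHessian();
        double[,] h = hessian.Evaluate(f, x);

        Console.WriteLine($"df/dx = {grad[0]}, df/dy = {grad[1]}");
        Console.WriteLine($"d2f/dx2 = {h[0, 0]}, d2f/dxdy = {h[0, 1]}");
    }
}
```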
+ + + + Evaluates the Jacobian of a multivariate function f at vector x given a current function value. + + + To minimize the number of function evaluations, a user can supply the current value of the function + to be used in computing the Jacobian. This value must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Current function value at finite difference center. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function array f at vector x. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Jacobian matrix. + + + + Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. + + + To minimize the number of function evaluations, a user can supply a vector of current values of the functions + to be used in computing the Jacobian. These value must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Vector of current function values. + Jacobian matrix. + + + + Resets the function evaluation counter for the Jacobian. + + + + + Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Double-Exponential integration. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The number of Gauss-Legendre points. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Gauss-Kronrod integration. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the differintegral of order n at x. + + + + Metrics to measure the distance between two structures. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. 
the L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Pearson's distance, i.e. 1 - the person correlation coefficient. + + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Discrete Univariate Bernoulli distribution. + The Bernoulli distribution is a distribution over bits. The parameter + p specifies the probability that a 1 is generated. + Wikipedia - Bernoulli distribution. + + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + If the Bernoulli parameter is not in the range [0,1]. + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + If the Bernoulli parameter is not in the range [0,1]. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. 
+ + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Generates one sample from the Bernoulli distribution. + + The random source to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A random sample from the Bernoulli distribution. + + + + Samples a Bernoulli distributed random variable. + + A sample from the Bernoulli distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. + + + + Samples a sequence of Bernoulli distributed random variables. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. 
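The Bernoulli entries above describe a small distribution object plus static helpers. A minimal usage sketch, assuming the Bernoulli class in MathNet.Numerics.Distributions; member names such as Probability, CumulativeDistribution and Samples are inferred from the descriptions rather than quoted from the library:

```csharp
using System;
using MathNet.Numerics.Distributions;

class BernoulliDemo
{
    static void Main()
    {
        // Bernoulli with success probability p = 0.3.
        var coin = new Bernoulli(0.3);
        Console.WriteLine($"Mean = {coin.Mean}, Variance = {coin.Variance}");

        // PMF at k = 1, i.e. P(X = 1) = p, and CDF at 0, i.e. P(X <= 0) = 1 - p.
        Console.WriteLine(coin.Probability(1));
        Console.WriteLine(coin.CumulativeDistribution(0.0));

        // Fill an array with draws; each sample is 0 or 1.
        var samples = new int[10];
        coin.Samples(samples);
        Console.WriteLine(string.Join(",", samples));
    }
}
```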
+ + + + Samples a sequence of Bernoulli distributed random variables. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Continuous Univariate Beta distribution. + For details about this distribution, see + Wikipedia - Beta distribution. + + + There are a few special cases for the parameterization of the Beta distribution. When both + shape parameters are positive infinity, the Beta distribution degenerates to a point distribution + at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point + distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution + degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the + distribution degenerates to a point distribution at the non-zero shape parameter. + + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + A string representation of the Beta distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. + + + + + Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Beta distribution. + + + + + Gets the variance of the Beta distribution. + + + + + Gets the standard deviation of the Beta distribution. + + + + + Gets the entropy of the Beta distribution. + + + + + Gets the skewness of the Beta distribution. + + + + + Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. + + + + + Gets the median of the Beta distribution. + + + + + Gets the minimum of the Beta distribution. + + + + + Gets the maximum of the Beta distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . 
+ + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Beta distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Beta distribution. + + a sequence of samples from the distribution. + + + + Samples Beta distributed random variables by sampling two Gamma variables and normalizing. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a random number from the Beta distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. 
+ The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Beta-Binomial distribution. + The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising + when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. + The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. + It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. + Wikipedia - Beta-Binomial distribution. + + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. 
Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a random number from the BetaBinomial distribution. + + + + Samples a BetaBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of BetaBinomial distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a BetaBinomial distributed random variable. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Samples an array of BetaBinomial distributed random variables. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. 
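The Beta family documented above (the plain Beta distribution, the Beta-Binomial compound, and the scaled variant whose constructors appear just above) shares the α/β shape parameterization. A short sketch with the plain Beta distribution, assuming the Beta class and the usual Density/CDF/Sample members implied by the descriptions:

```csharp
using System;
using MathNet.Numerics.Distributions;

class BetaDemo
{
    static void Main()
    {
        // Beta(α = 2, β = 5): skewed towards 0, mean = α / (α + β).
        var beta = new Beta(2.0, 5.0);
        Console.WriteLine($"Mean = {beta.Mean}, Mode = {beta.Mode}");

        // Density and cumulative distribution at x = 0.25, instance and static forms.
        Console.WriteLine(beta.Density(0.25));
        Console.WriteLine(Beta.CDF(2.0, 5.0, 0.25));

        // One random draw (internally sampled from two Gamma variables, per the note above).
        Console.WriteLine(beta.Sample());
    }
}
```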
+ + + + Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast + is used to construct an underlying beta distribution. + + The minimum value. + The maximum value. + The most likely value (mode). + The random number generator which is used to draw random samples. + The Beta distribution derived from the PERT parameters. + + + + A string representation of the distribution. + + A string representation of the BetaScaled distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. + + + + + Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. + + + + + Gets the location (μ) of the BetaScaled distribution. + + + + + Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the BetaScaled distribution. + + + + + Gets the variance of the BetaScaled distribution. + + + + + Gets the standard deviation of the BetaScaled distribution. + + + + + Gets the entropy of the BetaScaled distribution. + + + + + Gets the skewness of the BetaScaled distribution. + + + + + Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. + + + + + Gets the median of the BetaScaled distribution. + + + + + Gets the minimum of the BetaScaled distribution. + + + + + Gets the maximum of the BetaScaled distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the BetaScaled distribution. 
Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. 
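The PERT factory described earlier builds a scaled Beta distribution from a minimum, maximum and most-likely value for risk-analysis forecasts. A hedged sketch of that construction using the BetaScaled constructor documented above; the α/β mapping shown is the conventional PERT rule with shape factor 4 and may differ in detail from whatever the library's own factory does:

```csharp
using System;
using MathNet.Numerics.Distributions;

class PertSketch
{
    static void Main()
    {
        // Expert forecast: minimum, most likely (mode) and maximum task duration.
        double min = 2.0, likely = 5.0, max = 12.0;

        // Conventional PERT mapping onto Beta shape parameters (shape factor 4).
        double alpha = 1.0 + 4.0 * (likely - min) / (max - min);
        double beta = 1.0 + 4.0 * (max - likely) / (max - min);

        // BetaScaled(α, β, location, scale): a Beta stretched onto [min, max].
        var pert = new BetaScaled(alpha, beta, min, max - min);

        // The classic PERT mean estimate (min + 4*likely + max) / 6 matches the distribution mean.
        Console.WriteLine($"Mean = {pert.Mean}, Mode = {pert.Mode}");
        Console.WriteLine($"P(duration <= 7) = {pert.CumulativeDistribution(7.0)}");
        Console.WriteLine($"Random draw = {pert.Sample()}");
    }
}
```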
+ + + + Discrete Univariate Binomial distribution. + For details about this distribution, see + Wikipedia - Binomial distribution. + + + The distribution is parameterized by a probability (between 0.0 and 1.0). + + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + If is not in the interval [0.0,1.0]. + If is negative. + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The random number generator which is used to draw random samples. + If is not in the interval [0.0,1.0]. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + + + + Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. + + + + + Gets the number of trials. Range: n ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the Binomial distribution without doing parameter checking. + + The random number generator to use. + The success probability (p) in each trial. 
Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successful trials. + + + + Samples a Binomially distributed random variable. + + The number of successes in N trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Binomially distributed random variables. + + a sequence of successes in N trials. + + + + Samples a binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Samples a binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Gets the scale (a) of the distribution. Range: a > 0. + + + + + Gets the first shape parameter (c) of the distribution. Range: c > 0. + + + + + Gets the second shape parameter (k) of the distribution. Range: k > 0. + + + + + Initializes a new instance of the Burr Type XII class. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Burr distribution. + + + + + Gets the variance of the Burr distribution. + + + + + Gets the standard deviation of the Burr distribution. + + + + + Gets the mode of the Burr distribution. + + + + + Gets the minimum of the Burr distribution. + + + + + Gets the maximum of the Burr distribution. + + + + + Gets the entropy of the Burr distribution (currently not supported). + + + + + Gets the skewness of the Burr distribution. + + + + + Gets the median of the Burr distribution. + + + + + Generates a sample from the Burr distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. 
+ + The array to fill with the samples. + + + + Generates a sequence of samples from the Burr distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Discrete Univariate Categorical distribution. + For details about this distribution, see + Wikipedia - Categorical distribution. This + distribution is sometimes called the Discrete distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. 
+ + + Support: 0..k where k = length(probability mass array)-1 + + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class from a . The distribution + will not be automatically updated when the histogram changes. The categorical distribution will have + one value for each bucket and a probability for that value proportional to the bucket count. + + The histogram from which to create the categorical variable. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Checks whether the parameters of the distribution are valid. + + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Checks whether the parameters of the distribution are valid. + + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Gets the probability mass vector (non-negative ratios) of the multinomial. + + Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a . + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets he mode of the distribution. + + Throws a . + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. 
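The Categorical entries above accept unnormalized weights and sample integers from 0 to k. A minimal sketch, assuming the Categorical class with Probability, InvCDF and Samples members as described; the static InvCDF signature is an assumption based on the parameter descriptions:

```csharp
using System;
using MathNet.Numerics.Distributions;

class CategoricalDemo
{
    static void Main()
    {
        // Unnormalized ratios are allowed; they act as weights over the
        // support 0..3 (length of the mass array minus one).
        var weights = new double[] { 1.0, 2.0, 4.0, 1.0 };
        var cat = new Categorical(weights);

        // P(X = 2) = 4 / 8 = 0.5 after normalization.
        Console.WriteLine(cat.Probability(2));

        // Inverse CDF maps a probability in [0,1] back onto the integer support.
        Console.WriteLine(Categorical.InvCDF(weights, 0.9));

        // Fill an array with draws; each is an integer in 0..3.
        var draws = new int[8];
        cat.Samples(draws);
        Console.WriteLine(string.Join(",", draws));
    }
}
```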
+ + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the cumulative distribution function. This method performs no parameter checking. + If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + An array representing the unnormalized cumulative distribution function. + + + + Returns one trials from the categorical distribution. + + The random number generator to use. + The (unnormalized) cumulative distribution of the probability distribution. + One sample from the categorical distribution implied by . + + + + Samples a Binomially distributed random variable. + + The number of successful trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of successful trial counts. + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. 
+ random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Continuous Univariate Cauchy distribution. + The Cauchy distribution is a symmetric continuous probability distribution. For details about this distribution, see + Wikipedia - Cauchy distribution. + + + + + Initializes a new instance of the class with the location parameter set to 0 and the scale parameter set to 1 + + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Gets the location (x0) of the distribution. + + + + + Gets the scale (γ) of the distribution. Range: γ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. 
+ + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. 
+ + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi distribution. + This distribution is a continuous probability distribution. The distribution usually arises when a k-dimensional vector's orthogonal + components are independent and each follow a standard normal distribution. The length of the vector will + then have a chi distribution. + Wikipedia - Chi distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Chi distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Chi distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. 
+ the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi-Squared distribution. + This distribution is a sum of the squares of k independent standard normal random variables. + Wikipedia - ChiSquare distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi-Squared distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. 
This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ChiSquare distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ChiSquare distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + Generates a sample from the ChiSquare distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sample from the ChiSquare distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Continuous Univariate Uniform distribution. + The continuous uniform distribution is a distribution over real numbers. For details about this distribution, see + Wikipedia - Continuous uniform distribution. + + + + + Initializes a new instance of the ContinuousUniform class with lower bound 0 and upper bound 1. + + + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + If the upper bound is smaller than the lower bound. 
+ + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + If the upper bound is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ContinuousUniform distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the inverse cumulative density at . 
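The continuous uniform entries above reduce to three closed forms on [lower, upper]: a constant density 1/(upper − lower), a piecewise-linear CDF, and an inverse CDF that is a straight interpolation between the bounds. A short sketch of those formulas follows (assuming lower < upper; the function names are illustrative, not the documented API).

```python
def uniform_pdf(lower, upper, x):
    """Density is 1/(upper - lower) inside [lower, upper] and 0 outside."""
    if not lower < upper:
        raise ValueError("requires lower < upper")
    return 1.0 / (upper - lower) if lower <= x <= upper else 0.0

def uniform_cdf(lower, upper, x):
    """P(X <= x): 0 below the support, 1 above it, linear in between."""
    if x < lower:
        return 0.0
    if x > upper:
        return 1.0
    return (x - lower) / (upper - lower)

def uniform_inv_cdf(lower, upper, p):
    """Quantile function: the x with CDF(x) = p, for p in [0, 1]."""
    return lower + p * (upper - lower)
```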
+ + + + + Generates a sample from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. + + + + Generates a sample from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. + + + + Discrete Univariate Conway-Maxwell-Poisson distribution. + The Conway-Maxwell-Poisson distribution is a generalization of the Poisson, Geometric and Bernoulli + distributions. It is parameterized by two real numbers "lambda" and "nu". For + + nu = 0 the distribution reverts to a Geometric distribution + nu = 1 the distribution reverts to the Poisson distribution + nu -> infinity the distribution converges to a Bernoulli distribution + + This implementation will cache the value of the normalization constant. + Wikipedia - ConwayMaxwellPoisson distribution. + + + + + The mean of the distribution. + + + + + The variance of the distribution. + + + + + Caches the value of the normalization constant. + + + + + Since many properties of the distribution can only be computed approximately, the tolerance + level specifies how much error we accept. + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Gets the lambda (λ) parameter. Range: λ > 0. + + + + + Gets the rate of decay (ν) parameter. Range: ν ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. 
+ + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the cumulative distribution at location . + + + + + Gets the normalization constant of the Conway-Maxwell-Poisson distribution. + + + + + Computes an approximate normalization constant for the CMP distribution. + + The lambda (λ) parameter for the CMP distribution. + The rate of decay (ν) parameter for the CMP distribution. + + an approximate normalization constant for the CMP distribution. + + + + + Returns one trials from the distribution. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The z parameter. + + One sample from the distribution implied by , , and . + + + + + Samples a Conway-Maxwell-Poisson distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples a sequence of a Conway-Maxwell-Poisson distributed random variables. + + + a sequence of samples from a Conway-Maxwell-Poisson distribution. + + + + + Samples a random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter. 
Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Multivariate Dirichlet distribution. For details about this distribution, see + Wikipedia - Dirichlet distribution. + + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + No parameter can be less than zero and at least one parameter should be larger than zero. + + The parameters of the Dirichlet distribution. + + + + Gets or sets the parameters of the Dirichlet distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the dimension of the Dirichlet distribution. + + + + + Gets the sum of the Dirichlet parameters. + + + + + Gets the mean of the Dirichlet distribution. + + + + + Gets the variance of the Dirichlet distribution. + + + + + Gets the entropy of the distribution. + + + + + Computes the density of the distribution. + + The locations at which to compute the density. + the density at . + The Dirichlet distribution requires that the sum of the components of x equals 1. + You can also leave out the last component, and it will be computed from the others. + + + + Computes the log density of the distribution. + + The locations at which to compute the density. + the density at . + + + + Samples a Dirichlet distributed random vector. + + A sample from this distribution. + + + + Samples a Dirichlet distributed random vector. + + The random number generator to use. + The Dirichlet distribution parameter. + a sample from the distribution. + + + + Discrete Univariate Uniform distribution. + The discrete uniform distribution is a distribution over integers. The distribution + is parameterized by a lower and upper bound (both inclusive). + Wikipedia - Discrete uniform distribution. + + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Gets the inclusive lower bound of the probability distribution. + + + + + Gets the inclusive upper bound of the probability distribution. 
+ + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution; since every element in the domain has the same probability this method returns the middle one. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Generates one sample from the discrete uniform distribution. This method does not do any parameter checking. + + The random source to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A random sample from the discrete uniform distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of uniformly distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a uniformly distributed random variable. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. 
+ Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Samples a uniformly distributed random variable. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Continuous Univariate Erlang distribution. + This distribution is a continuous probability distribution with wide applicability primarily due to its + relation to the exponential and Gamma distributions. + Wikipedia - Erlang distribution. + + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Erlang distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The scale (μ) of the Erlang distribution. Range: μ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Erlang distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Gets the shape (k) of the Erlang distribution. Range: k ≥ 0. + + + + + Gets the rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + + Gets the scale of the Erlang distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum value. + + + + + Gets the Maximum value. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). 
+ + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Erlang distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Erlang distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Exponential distribution. + The exponential distribution is a distribution over the real numbers parameterized by one non-negative parameter. + Wikipedia - exponential distribution. + + + + + Initializes a new instance of the class. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. 
+ + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Gets the rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Exponential distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. 
+ The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Draws a random sample from the distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate F-distribution, also known as Fisher-Snedecor distribution. + For details about this distribution, see + Wikipedia - FisherSnedecor distribution. + + + + + Initializes a new instance of the class. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Initializes a new instance of the class. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Gets the first degree of freedom (d1) of the distribution. Range: d1 > 0. + + + + + Gets the second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. 
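The sampling entries that follow draw F-distributed values for degrees of freedom d1 and d2. A standard construction, shown here as a hedged Python sketch rather than the documented implementation, is the ratio of two independent chi-squared variates each scaled by its degrees of freedom, with every chi-squared draw obtained from a Gamma(d/2, scale 2) variate.

```python
import random

def chi_squared_sample(d, rng=random):
    """Chi-squared with d degrees of freedom equals Gamma(shape=d/2, scale=2)."""
    return rng.gammavariate(d / 2.0, 2.0)

def fisher_snedecor_sample(d1, d2, rng=random):
    """F(d1, d2) as the ratio of two scaled, independent chi-squared variates."""
    if d1 <= 0 or d2 <= 0:
        raise ValueError("both degrees of freedom must be positive")
    return (chi_squared_sample(d1, rng) / d1) / (chi_squared_sample(d2, rng) / d2)

samples = [fisher_snedecor_sample(5.0, 10.0) for _ in range(1000)]
# For d2 > 2 the mean of F(d1, d2) is d2/(d2 - 2); with d2 = 10 that is 1.25.
print(sum(samples) / len(samples))
```

The last two lines are only a quick sanity check: for d2 > 2 the sample mean should settle near d2/(d2 − 2).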
+ + + + Generates a sample from the FisherSnedecor distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the FisherSnedecor distribution. + + a sequence of samples from the distribution. + + + + Generates one sample from the FisherSnedecor distribution without parameter checking. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a FisherSnedecor distributed random number. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. 
+ + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Gamma distribution. + For details about this distribution, see + Wikipedia - Gamma distribution. + + + The Gamma distribution is parametrized by a shape and inverse scale parameter. When we want + to specify a Gamma distribution which is a point distribution we set the shape parameter to be the + location of the point distribution and the inverse scale as positive infinity. The distribution + with shape and inverse scale both zero is undefined. + + Random number generation for the Gamma distribution is based on the algorithm in: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Gamma distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Gamma distribution. Range: k ≥ 0. + The scale (θ) of the Gamma distribution. Range: θ ≥ 0 + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Gamma distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Gets or sets the shape (k, α) of the Gamma distribution. Range: α ≥ 0. + + + + + Gets or sets the rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + + Gets or sets the scale (θ) of the Gamma distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Gamma distribution. + + + + + Gets the variance of the Gamma distribution. + + + + + Gets the standard deviation of the Gamma distribution. + + + + + Gets the entropy of the Gamma distribution. + + + + + Gets the skewness of the Gamma distribution. + + + + + Gets the mode of the Gamma distribution. + + + + + Gets the median of the Gamma distribution. + + + + + Gets the minimum of the Gamma distribution. + + + + + Gets the maximum of the Gamma distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . 
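The remarks above credit random number generation for this distribution to Marsaglia & Tsang, "A Simple Method for Generating Gamma Variables" (ACM Transactions on Mathematical Software, 2000). The following is a compact Python sketch of that rejection scheme under the shape/rate parametrization used here; the function name is mine, parameter checking is minimal, and it is not claimed to match the documented implementation line for line.

```python
import math
import random

def gamma_sample(shape, rate, rng=random):
    """Gamma(shape, rate) draw via the Marsaglia & Tsang (2000) squeeze method.
    Assumes shape > 0 and rate > 0; dividing by rate converts the unit-scale
    draw to the rate (inverse scale) parametrization described above."""
    if shape < 1.0:
        # Boosting trick: Gamma(a) = Gamma(a + 1) * U^(1/a).
        return gamma_sample(shape + 1.0, rate, rng) * rng.random() ** (1.0 / shape)
    d = shape - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue
        u = rng.random()
        # Cheap squeeze test first, exact log test as a fallback.
        if u < 1.0 - 0.0331 * x ** 4:
            return d * v / rate
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v / rate
```

Here d·v is a Gamma(shape, scale 1) draw; the division by the rate at the end matches the shape/rate convention used throughout the entries above.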
+ + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Gamma distribution. + + a sequence of samples from the distribution. + + + + Sampling implementation based on: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + This method performs no parameter checks. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + A sample from a Gamma distributed random variable. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. 
+ The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Geometric distribution. + The Geometric distribution is a distribution over positive integers parameterized by one positive real number. + This implementation of the Geometric distribution will never generate 0's. + Wikipedia - geometric distribution. + + + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a not supported exception. + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. 
+ The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Returns one sample from the distribution. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + One sample from the distribution implied by . + + + + Samples a Geometric distributed random variable. + + A sample from the Geometric distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Geometric distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Discrete Univariate Hypergeometric distribution. + This distribution is a discrete probability distribution that describes the number of successes in a sequence + of n draws from a finite population without replacement, just as the binomial distribution + describes the number of successes for draws with replacement + Wikipedia - Hypergeometric distribution. + + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the size of the population (N). + + + + + Gets the number of draws without replacement (n). + + + + + Gets the number successes within the population (K, M). + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. 
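To make the Geometric and Hypergeometric entries concrete, here is a small sketch under the same assumption (MathNet.Numerics.Distributions namespace, illustrative parameter values). The constructor argument order follows the parameter descriptions given above; the Probability call is the PMF entry documented for both classes.

```csharp
using System;
using MathNet.Numerics.Distributions;

class DiscreteDemo
{
    static void Main()
    {
        // Geometric(p): per the note above, this implementation never generates 0,
        // so the support starts at k = 1.
        var geometric = new Geometric(0.25);                       // p = 0.25
        Console.WriteLine($"P(X = 3) = {geometric.Probability(3)}"); // (1 - p)^2 * p
        Console.WriteLine($"Sample   = {geometric.Sample()}");       // always ≥ 1

        // Hypergeometric(N, K, n): successes in n draws without replacement
        // from a population of size N containing K successes.
        var hyper = new Hypergeometric(50, 5, 10);                 // N = 50, K = 5, n = 10
        Console.WriteLine($"P(Y = 1) = {hyper.Probability(1)}");
        Console.WriteLine($"Mean     = {hyper.Mean}");             // n * K / N = 1.0
    }
}
```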
+ + + + + Gets the maximum of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the cumulative distribution at location . + + + + + Generates a sample from the Hypergeometric distribution without doing parameter checking. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The n parameter of the distribution. + a random number from the Hypergeometric distribution. + + + + Samples a Hypergeometric distributed random variable. + + The number of successes in n trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Hypergeometric distributed random variables. + + a sequence of successes in n trials. + + + + Samples a random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). 
+ The number of draws without replacement (n). + + + + Continuous Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by a double. + + + + + Gets the largest element in the domain of the distribution which can be represented by a double. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Discrete Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by an integer. + + + + + Gets the largest element in the domain of the distribution which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Probability Distribution. + + + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Continuous Univariate Inverse Gamma distribution. + The inverse Gamma distribution is a distribution over the positive real numbers parameterized by + two positive parameters. + Wikipedia - InverseGamma distribution. + + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Gets or sets the shape (α) parameter. Range: α > 0. + + + + + Gets or sets The scale (β) parameter. Range: β > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. 
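The Inverse Gamma entries above document a shape/scale constructor and the standard moment properties. A minimal usage sketch, again assuming the MathNet.Numerics.Distributions namespace and illustrative α, β values:

```csharp
using System;
using MathNet.Numerics.Distributions;

class InverseGammaDemo
{
    static void Main()
    {
        // InverseGamma(α, β): shape and scale, both strictly positive as documented above.
        var invGamma = new InverseGamma(3.0, 2.0);   // α = 3, β = 2

        Console.WriteLine($"Mean     = {invGamma.Mean}");   // β / (α - 1) = 1.0 (requires α > 1)
        Console.WriteLine($"Mode     = {invGamma.Mode}");   // β / (α + 1) = 0.5
        Console.WriteLine($"PDF(1.0) = {invGamma.Density(1.0)}");
        Console.WriteLine($"CDF(1.0) = {invGamma.CumulativeDistribution(1.0)}");
        Console.WriteLine($"Sample   = {invGamma.Sample()}");
    }
}
```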
+ + + + + Gets the median of the distribution. + + Throws . + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Gets the mean (μ) of the distribution. Range: μ > 0. + + + + + Gets the shape (λ) of the distribution. Range: λ > 0. + + + + + Initializes a new instance of the InverseGaussian class. + + The mean (μ) of the distribution. Range: μ > 0. 
+ The shape (λ) of the distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Inverse Gaussian distribution. + + + + + Gets the variance of the Inverse Gaussian distribution. + + + + + Gets the standard deviation of the Inverse Gaussian distribution. + + + + + Gets the median of the Inverse Gaussian distribution. + No closed form analytical expression exists, so this value is approximated numerically and can throw an exception. + + + + + Gets the minimum of the Inverse Gaussian distribution. + + + + + Gets the maximum of the Inverse Gaussian distribution. + + + + + Gets the skewness of the Inverse Gaussian distribution. + + + + + Gets the kurtosis of the Inverse Gaussian distribution. + + + + + Gets the mode of the Inverse Gaussian distribution. + + + + + Gets the entropy of the Inverse Gaussian distribution (currently not supported). + + + + + Generates a sample from the inverse Gaussian distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the inverse Gaussian distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the inverse Gaussian distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). 
+ + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Estimates the Inverse Gaussian parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + An Inverse Gaussian distribution. + + + + Multivariate Inverse Wishart distribution. This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The inverse Wishart distribution + is the conjugate prior for the covariance matrix of a multivariate normal distribution. + Wikipedia - Inverse-Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Gets or sets the degree of freedom (ν) for the inverse Wishart distribution. + + + + + Gets or sets the scale matrix (Ψ) for the inverse Wishart distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 0-340-80752-0. + + + + Gets the variance of the distribution. + + The variance of the distribution. + Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. + + + + Evaluates the probability density function for the inverse Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + a sample from the distribution. + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + The random number generator to use. 
+ The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + a sample from the distribution. + + + + Univariate Probability Distribution. + + + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Continuous Univariate Laplace distribution. + The Laplace distribution is a distribution over the real numbers parameterized by a mean and + scale parameter. The PDF is: + p(x) = \frac{1}{2 * scale} \exp{- |x - mean| / scale}. + Wikipedia - Laplace distribution. + + + + + Initializes a new instance of the class (location = 0, scale = 1). + + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + If is negative. + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + + + + Gets the location (μ) of the Laplace distribution. + + + + + Gets the scale (b) of the Laplace distribution. Range: b > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples a Laplace distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sample from the Laplace distribution. + + a sample from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. 
ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Log-Normal distribution. + For details about this distribution, see + Wikipedia - Log-Normal distribution. + + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the logarithm of the distribution. + The shape (σ) of the logarithm of the distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a log-normal distribution with the desired mu and sigma parameters. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Constructs a log-normal distribution with the desired mean and variance. + + The mean of the log-normal distribution. + The variance of the log-normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Estimates the log-normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + MATLAB: lognfit + + + + A string representation of the distribution. + + a string representation of the distribution. 
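The log-normal entries above list a (μ, σ) constructor, a mean/variance factory and a maximum-likelihood Estimate helper. The sketch below exercises those three entry points; it assumes the Mu/Sigma property names of the library and uses illustrative values and sample counts.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class LogNormalDemo
{
    static void Main()
    {
        // Parametrized on the log scale: μ and σ of the underlying normal.
        var logNormal = new LogNormal(0.0, 0.5);            // μ = 0, σ = 0.5
        Console.WriteLine($"Mean   = {logNormal.Mean}");    // exp(μ + σ²/2)
        Console.WriteLine($"Median = {logNormal.Median}");  // exp(μ)

        // Alternative factory: match a desired mean/variance of the log-normal itself.
        var byMoments = LogNormal.WithMeanVariance(2.0, 1.0);
        Console.WriteLine($"byMoments μ = {byMoments.Mu}");

        // Maximum-likelihood fit from data (analogous to MATLAB's lognfit, per the remark above).
        double[] data = logNormal.Samples().Take(1000).ToArray();
        var fitted = LogNormal.Estimate(data);
        Console.WriteLine($"Fitted μ = {fitted.Mu}, σ = {fitted.Sigma}");
    }
}
```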
+ + + + Tests whether the provided values are valid parameters for this distribution. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + + + + Gets the log-scale (μ) (mean of the logarithm) of the distribution. + + + + + Gets the shape (σ) (standard deviation of the logarithm) of the distribution. Range: σ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mu of the log-normal distribution. + + + + + Gets the variance of the log-normal distribution. + + + + + Gets the standard deviation of the log-normal distribution. + + + + + Gets the entropy of the log-normal distribution. + + + + + Gets the skewness of the log-normal distribution. + + + + + Gets the mode of the log-normal distribution. + + + + + Gets the median of the log-normal distribution. + + + + + Gets the minimum of the log-normal distribution. + + + + + Gets the maximum of the log-normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the density at . + + MATLAB: lognpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: logncdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the inverse cumulative density at . 
+ + MATLAB: logninv + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Multivariate Matrix-valued Normal distributions. The distribution + is parameterized by a mean matrix (M), a covariance matrix for the rows (V) and a covariance matrix + for the columns (K). If the dimension of M is d-by-m then V is d-by-d and K is m-by-m. + Wikipedia - MatrixNormal distribution. + + + + + The mean of the matrix normal distribution. + + + + + The covariance matrix for the rows. + + + + + The covariance matrix for the columns. + + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + The random number generator which is used to draw random samples. + If the dimensions of the mean and two covariance matrices don't match. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + + + + Gets the mean. (M) + + The mean of the distribution. + + + + Gets the row covariance. (V) + + The row covariance. + + + + Gets the column covariance. (K) + + The column covariance. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Evaluates the probability density function for the matrix normal distribution. + + The matrix at which to evaluate the density at. + the density at + If the argument does not have the correct dimensions. + + + + Samples a matrix normal distributed random variable. + + A random number from this distribution. + + + + Samples a matrix normal distributed random variable. + + The random number generator to use. 
+ The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + a sequence of samples from the distribution. + + + + Samples a vector normal distributed random variable. + + The random number generator to use. + The mean of the vector normal distribution. + The covariance matrix of the vector normal distribution. + a sequence of samples from defined distribution. + + + + Multivariate Multinomial distribution. For details about this distribution, see + Wikipedia - Multinomial distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. + + + + + Stores the normalized multinomial probabilities. + + + + + The number of trials. + + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + Initializes a new instance of the Multinomial class from histogram . The distribution will + not be automatically updated when the histogram changes. + + Histogram instance + The number of trials. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative returns false, + if the sum of parameters is 0.0, or if the number of trials is negative; otherwise true. + + + + Gets the proportion of ratios. + + + + + Gets the number of trials. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Computes values of the probability mass function. + + Non-negative integers x1, ..., xk + The probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Computes values of the log probability mass function. + + Non-negative integers x1, ..., xk + The log probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Samples one multinomial distributed random variable. + + the counts for each of the different possible values. + + + + Samples a sequence multinomially distributed random variables. + + a sequence of counts for each of the different possible values. + + + + Samples one multinomial distributed random variable. + + The random number generator to use. 
+ An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + the counts for each of the different possible values. + + + + Samples a multinomially distributed random variable. + + The random number generator to use. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of variables needed. + a sequence of counts for each of the different possible values. + + + + Discrete Univariate Negative Binomial distribution. + The negative binomial is a distribution over the natural numbers with two parameters r, p. For the special + case that r is an integer one can interpret the distribution as the number of failures before the r'th success + when the probability of success is p. + Wikipedia - NegativeBinomial distribution. + + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Gets the number of successes. Range: r ≥ 0. + + + + + Gets the probability of success. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). 
+ + The location in the domain where we want to evaluate the log probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Samples a negative binomial distributed random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + a sample from the distribution. + + + + Samples a NegativeBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of NegativeBinomial distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Continuous Univariate Normal distribution, also known as Gaussian distribution. + For details about this distribution, see + Wikipedia - Normal distribution. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. 
+ The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a normal distribution from a mean and standard deviation. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + a normal distribution. + + + + Constructs a normal distribution from a mean and variance. + + The mean (μ) of the normal distribution. + The variance (σ^2) of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + + + + Constructs a normal distribution from a mean and precision. + + The mean (μ) of the normal distribution. + The precision of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + + + + Estimates the normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + MATLAB: normfit + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Gets the mean (μ) of the normal distribution. + + + + + Gets the standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + + Gets the variance of the normal distribution. + + + + + Gets the precision of the normal distribution. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the entropy of the normal distribution. + + + + + Gets the skewness of the normal distribution. + + + + + Gets the mode of the normal distribution. + + + + + Gets the median of the normal distribution. + + + + + Gets the minimum of the normal distribution. + + + + + Gets the maximum of the normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + a sample from the distribution. 
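The Normal entries above cover the default and (mean, standard deviation) constructors, the factory methods and the instance PDF/CDF/InvCDF members. A short sketch of that instance API, with illustrative parameters:

```csharp
using System;
using MathNet.Numerics.Distributions;

class NormalDemo
{
    static void Main()
    {
        var standard = new Normal();                 // mean 0.0, standard deviation 1.0
        var n = Normal.WithMeanStdDev(10.0, 2.0);    // factory documented above

        Console.WriteLine($"PDF(10)      = {n.Density(10.0)}");
        Console.WriteLine($"CDF(12)      = {n.CumulativeDistribution(12.0)}");            // ≈ 0.8413
        Console.WriteLine($"InvCDF(.975) = {n.InverseCumulativeDistribution(0.975)}");    // ≈ 13.92
        Console.WriteLine($"Sample       = {n.Sample()}");        // Box-Muller, per the remarks above
        Console.WriteLine($"Std sample   = {standard.Sample()}");
    }
}
```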
+ + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the density at . + + MATLAB: normpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: normcdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the inverse cumulative density at . + + MATLAB: norminv + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + This structure represents the type over which the distribution + is defined. + + + + + Initializes a new instance of the struct. + + The mean of the pair. + The precision of the pair. + + + + Gets or sets the mean of the pair. + + + + + Gets or sets the precision of the pair. 
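Alongside the instance members, the entries above also describe static PDF/CDF/InvCDF/Sample overloads that take the distribution parameters explicitly (with MATLAB normpdf/normcdf/norminv noted as analogues). A brief sketch of that static form, under the same namespace assumption:

```csharp
using System;
using MathNet.Numerics.Distributions;

class NormalStaticDemo
{
    static void Main()
    {
        // Static helpers take mean and standard deviation explicitly.
        Console.WriteLine(Normal.PDF(0.0, 1.0, 0.0));       // ≈ 0.3989
        Console.WriteLine(Normal.CDF(0.0, 1.0, 1.96));      // ≈ 0.975
        Console.WriteLine(Normal.InvCDF(0.0, 1.0, 0.975));  // ≈ 1.96

        // Drawing with an explicit random source.
        var rng = new Random(42);
        Console.WriteLine(Normal.Sample(rng, 0.0, 1.0));
    }
}
```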
+ + + + + Multivariate Normal-Gamma Distribution. + The distribution is the conjugate prior distribution for the + distribution. It specifies a prior over the mean and precision of the distribution. + It is parameterized by four numbers: the mean location, the mean scale, the precision shape and the + precision inverse scale. + The distribution NG(mu, tau | mloc,mscale,psscale,pinvscale) = Normal(mu | mloc, 1/(mscale*tau)) * Gamma(tau | psscale,pinvscale). + The following degenerate cases are special: when the precision is known, + the precision shape will encode the value of the precision while the precision inverse scale is positive + infinity. When the mean is known, the mean location will encode the value of the mean while the scale + will be positive infinity. A completely degenerate NormalGamma distribution with known mean and precision is possible as well. + Wikipedia - Normal-Gamma distribution. + + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Gets the location of the mean. + + + + + Gets the scale of the mean. + + + + + Gets the shape of the precision. + + + + + Gets the inverse scale of the precision. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Returns the marginal distribution for the mean of the NormalGamma distribution. + + the marginal distribution for the mean of the NormalGamma distribution. + + + + Returns the marginal distribution for the precision of the distribution. + + The marginal distribution for the precision of the distribution/ + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the variance of the distribution. + + The mean of the distribution. + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + Density value + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + Density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + The log of the density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + The log of the density value + + + + Generates a sample from the NormalGamma distribution. + + a sample from the distribution. + + + + Generates a sequence of samples from the NormalGamma distribution + + a sequence of samples from the distribution. + + + + Generates a sample from the NormalGamma distribution. + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sample from the distribution. 
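The Normal-Gamma entries above define the prior NG(μ, τ | meanLocation, meanScale, precisionShape, precisionInverseScale) and a density that can be evaluated at a mean/precision pair. The sketch below assumes the four-argument constructor in the documented order and that Sample() returns the MeanPrecisionPair structure described just above; the parameter values are illustrative.

```csharp
using System;
using MathNet.Numerics.Distributions;

class NormalGammaDemo
{
    static void Main()
    {
        // Conjugate prior over (mean, precision) of a normal distribution, as described above.
        var ng = new NormalGamma(0.0, 1.0, 2.0, 2.0);

        // Joint density evaluated at a candidate mean/precision pair.
        Console.WriteLine($"p(mu=0, tau=1) = {ng.Density(0.0, 1.0)}");

        // A draw is a mean/precision pair (the MeanPrecisionPair structure documented above).
        var draw = ng.Sample();
        Console.WriteLine($"sampled mean = {draw.Mean}, sampled precision = {draw.Precision}");
    }
}
```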
+ + + + Generates a sequence of samples from the NormalGamma distribution + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sequence of samples from the distribution. + + + + Continuous Univariate Pareto distribution. + The Pareto distribution is a power law probability distribution that coincides with social, + scientific, geophysical, actuarial, and many other types of observable phenomena. + For details about this distribution, see + Wikipedia - Pareto distribution. + + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + If or are negative. + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The random number generator which is used to draw random samples. + If or are negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Pareto distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. 
+ The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Poisson distribution. + + + Distribution is described at Wikipedia - Poisson distribution. + Knuth's method is used to generate Poisson distributed random variables. + f(x) = exp(-λ)*λ^x/x!; + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + If is equal or less then 0.0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + If is equal or less then 0.0. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + + + + Gets the Poisson distribution parameter λ. Range: λ > 0. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. 
+ + Approximation, see Wikipedia Poisson distribution + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + Approximation, see Wikipedia Poisson distribution + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Generates one sample from the Poisson distribution. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by Knuth's method. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by "Rejection method PA". + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + "Rejection method PA" from "The Computer Generation of Poisson Random Variables" by A. C. Atkinson, + Journal of the Royal Statistical Society Series C (Applied Statistics) Vol. 28, No. 1. (1979) + The article is on pages 29-35. The algorithm given here is on page 32. + + + + Samples a Poisson distributed random variable. + + A sample from the Poisson distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Poisson distributed random variables. + + a sequence of successes in N trials. + + + + Samples a Poisson distributed random variable. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. 
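The Pareto and Poisson members documented above (and continuing below) follow the same split into instance methods and static helpers. A short sketch under the assumption that the Pareto(xm, α) and Poisson(λ) constructors and the Probability/CDF/Sample helpers exist as described; the exact signatures are not confirmed by this file.

```csharp
// Hedged sketch for the Pareto (continuous) and Poisson (discrete) entries above.
// Assumed constructor orders: Pareto(xm, alpha) and Poisson(lambda).
using System;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Random;

class ParetoPoissonDemo
{
    static void Main()
    {
        var rng = new MersenneTwister(42);

        // Pareto with scale xm = 1 and shape alpha = 3.
        var pareto = new Pareto(1.0, 3.0) { RandomSource = rng };
        Console.WriteLine(pareto.Mean);               // alpha*xm/(alpha-1) = 1.5
        Console.WriteLine(pareto.Density(2.0));       // PDF at x = 2
        Console.WriteLine(Pareto.CDF(1.0, 3.0, 2.0)); // static CDF form

        // Poisson with rate lambda = 4; PMF/CDF are the discrete counterparts.
        var poisson = new Poisson(4.0);
        Console.WriteLine(poisson.Probability(2));            // P(X = 2)
        Console.WriteLine(poisson.CumulativeDistribution(2)); // P(X <= 2)
        Console.WriteLine(Poisson.Sample(rng, 4.0));          // drawn per the notes above
    }
}
```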
+ + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Samples a Poisson distributed random variable. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Rayleigh distribution. + The Rayleigh distribution (pronounced /ˈreɪli/) is a continuous probability distribution. As an + example of how it arises, the wind speed will have a Rayleigh distribution if the components of + the two-dimensional wind velocity vector are uncorrelated and normally distributed with equal variance. + For details about this distribution, see + Wikipedia - Rayleigh distribution. + + + + + Initializes a new instance of the class. + + The scale (σ) of the distribution. Range: σ > 0. + If is negative. + + + + Initializes a new instance of the class. + + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the scale (σ) of the distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Rayleigh distribution. 
+ + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (σ) of the distribution. Range: σ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (σ) of the distribution. Range: σ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized Error Distribution (SGED). + Implements the univariate SSkewed Generalized Error Distribution. For details about this + distribution, see + + Wikipedia - Generalized Error Distribution. + It includes Laplace, Normal and Student-t distributions. + This is the distribution with q=Inf. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedError class. This is a generalized error distribution + with location=0.0, scale=1.0, skew=0.0 and p=2.0 (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. 
Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Generates a sample from the Skew Generalized Error distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized Error distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized T-distribution. + Implements the univariate Skewed Generalized t-distribution. For details about this + distribution, see + + Wikipedia - Skewed generalized t-distribution. + The skewed generalized t-distribution contains many different distributions within it + as special cases based on the parameterization chosen. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. 
Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedT class. This is a skewed generalized t-distribution + with location=0.0, scale=1.0, skew=0.0, p=2.0 and q=Inf (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Given a parameter set, returns the distribution that matches this parameterization. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + Null if no known distribution matches the parameterization, else the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the first parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Gets the second parameter that controls the kurtosis of the distribution. Range: q > 0. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. 
Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the inverse cumulative density at . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Skew Generalized t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Stable distribution. 
+
+ A random variable is said to be stable (or to have a stable distribution) if it has
+ the property that a linear combination of two independent copies of the variable has
+ the same distribution, up to location and scale parameters.
+ For details about this distribution, see
+ Wikipedia - Stable distribution.
+
+
+
+
+ Initializes a new instance of the class.
+
+ The stability (α) of the distribution. Range: 2 ≥ α > 0.
+ The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1.
+ The scale (c) of the distribution. Range: c > 0.
+ The location (μ) of the distribution.
+
+
+
+ Initializes a new instance of the class.
+
+ The stability (α) of the distribution. Range: 2 ≥ α > 0.
+ The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1.
+ The scale (c) of the distribution. Range: c > 0.
+ The location (μ) of the distribution.
+ The random number generator which is used to draw random samples.
+
+
+
+ A string representation of the distribution.
+
+ a string representation of the distribution.
+
+
+
+ Tests whether the provided values are valid parameters for this distribution.
+
+ The stability (α) of the distribution. Range: 2 ≥ α > 0.
+ The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1.
+ The scale (c) of the distribution. Range: c > 0.
+ The location (μ) of the distribution.
+
+
+
+ Gets the stability (α) of the distribution. Range: 2 ≥ α > 0.
+
+
+
+
+ Gets the skewness (β) of the distribution. Range: 1 ≥ β ≥ -1.
+
+
+
+
+ Gets the scale (c) of the distribution. Range: c > 0.
+
+
+
+
+ Gets the location (μ) of the distribution.
+
+
+
+
+ Gets or sets the random number generator which is used to draw random samples.
+
+
+
+
+ Gets the mean of the distribution.
+
+
+
+
+ Gets the variance of the distribution.
+
+
+
+
+ Gets the standard deviation of the distribution.
+
+
+
+
+ Gets the entropy of the distribution.
+
+ Always throws a not supported exception.
+
+
+
+ Gets the skewness of the distribution.
+
+ Throws a not supported exception if Alpha != 2.
+
+
+
+ Gets the mode of the distribution.
+
+ Throws a not supported exception if Beta != 0.
+
+
+
+ Gets the median of the distribution.
+
+ Throws a not supported exception if Beta != 0.
+
+
+
+ Gets the minimum of the distribution.
+
+
+
+
+ Gets the maximum of the distribution.
+
+
+
+
+ Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x.
+
+ The location at which to compute the density.
+ the density at .
+
+
+
+ Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x).
+
+ The location at which to compute the log density.
+ the log density at .
+
+
+
+ Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x).
+
+ The location at which to compute the cumulative distribution function.
+ the cumulative distribution at location .
+ Throws a not supported exception unless Alpha = 2, or (Alpha = 1 and Beta = 0), or (Alpha = 0.5 and Beta = 1).
+
+
+
+ Samples the distribution.
+
+ The random number generator to use.
+ The stability (α) of the distribution. Range: 2 ≥ α > 0.
+ The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1.
+ The scale (c) of the distribution. Range: c > 0.
+ The location (μ) of the distribution.
+ a random number from the distribution.
+
+
+
+ Draws a random sample from the distribution.
+
+ A random number from this distribution.
+
+
+
+ Fills an array with samples generated from the distribution.
+
+
+
+
+ Generates a sequence of samples from the Stable distribution.
+
+ a sequence of samples from the distribution.
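A brief sketch of the Stable distribution documented above (its static helpers continue below), assuming a Stable(α, β, c, μ) constructor in the order the parameters are described. Per the notes above, Density and CumulativeDistribution only have closed forms in special cases (Gaussian, Cauchy, Lévy), so the example keeps those calls to α = 2; sampling works for any valid parameters.

```csharp
// Hedged sketch for the Stable distribution entries above.
// Assumed constructor order: Stable(alpha, beta, scale, location).
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class StableDemo
{
    static void Main()
    {
        // alpha = 2, beta = 0 is the Gaussian special case, so Density/CDF are defined.
        var gaussianCase = new Stable(2.0, 0.0, 1.0, 0.0);
        Console.WriteLine(gaussianCase.Density(0.5));
        Console.WriteLine(gaussianCase.CumulativeDistribution(0.5));

        // A heavy-tailed case (alpha = 1.5): sampling still works even though the
        // density has no closed form and some moments are undefined.
        var heavyTailed = new Stable(1.5, 0.0, 1.0, 0.0);
        double[] draws = heavyTailed.Samples().Take(5).ToArray();
        Console.WriteLine(string.Join(", ", draws));
    }
}
```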
+ + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Continuous Univariate Student's T-distribution. + Implements the univariate Student t-distribution. For details about this + distribution, see + + Wikipedia - Student's t-distribution. + + We use a slightly generalized version (compared to + Wikipedia) of the Student t-distribution. 
Namely, one which also + parameterizes the location and scale. See the book "Bayesian Data + Analysis" by Gelman et al. for more details. + The density of the Student t-distribution p(x|mu,scale,dof) = + Gamma((dof+1)/2) (1 + (x - mu)^2 / (scale * scale * dof))^(-(dof+1)/2) / + (Gamma(dof/2)*Sqrt(dof*pi*scale)). + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. This might involve heavy + computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the StudentT class. This is a Student t-distribution with location 0.0 + scale 1.0 and degrees of freedom 1. + + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Gets the location (μ) of the Student t-distribution. + + + + + Gets the scale (σ) of the Student t-distribution. Range: σ > 0. + + + + + Gets the degrees of freedom (ν) of the Student t-distribution. Range: ν > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Student t-distribution. + + + + + Gets the variance of the Student t-distribution. + + + + + Gets the standard deviation of the Student t-distribution. + + + + + Gets the entropy of the Student t-distribution. + + + + + Gets the skewness of the Student t-distribution. + + + + + Gets the mode of the Student t-distribution. + + + + + Gets the median of the Student t-distribution. + + + + + Gets the minimum of the Student t-distribution. + + + + + Gets the maximum of the Student t-distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . 
+ + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Samples student-t distributed random variables. + + The algorithm is method 2 in section 5, chapter 9 + in L. Devroye's "Non-Uniform Random Variate Generation" + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a random number from the standard student-t distribution. + + + + Generates a sample from the Student t-distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Student t-distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Student t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Student t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. 
+ a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Triangular distribution. + For details, see Wikipedia - Triangular distribution. + + The distribution will use the by default. + Users can get/set the random number generator by using the property. + The statistics classes will check whether all the incoming parameters are in the allowed range. This might involve heavy computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The random number generator which is used to draw random samples. + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets or sets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. 
This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Triangular distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Generates a sample from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). 
Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Initializes a new instance of the TruncatedPareto class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The random number generator which is used to draw random samples. + If or are non-positive or if T ≤ xm. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets the truncation (T) of the distribution. Range: T > 0. + + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Gets the mean of the truncated Pareto distribution. + + + + + Gets the variance of the truncated Pareto distribution. + + + + + Gets the standard deviation of the truncated Pareto distribution. + + + + + Gets the mode of the truncated Pareto distribution (not supported). + + + + + Gets the minimum of the truncated Pareto distribution. + + + + + Gets the maximum of the truncated Pareto distribution. + + + + + Gets the entropy of the truncated Pareto distribution (not supported). + + + + + Gets the skewness of the truncated Pareto distribution. + + + + + Gets the median of the truncated Pareto distribution. + + + + + Generates a sample from the truncated Pareto distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
+ + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Continuous Univariate Weibull distribution. + For details about this distribution, see + Wikipedia - Weibull distribution. + + + The Weibull distribution is parametrized by a shape and scale parameter. + + + + + Reusable intermediate result 1 / (_scale ^ _shape) + + + By caching this parameter we can get slightly better numerics precision + in certain constellations without any additional computations. + + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Gets the shape (k) of the Weibull distribution. Range: k > 0. + + + + + Gets the scale (λ) of the Weibull distribution. Range: λ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Weibull distribution. + + + + + Gets the variance of the Weibull distribution. + + + + + Gets the standard deviation of the Weibull distribution. + + + + + Gets the entropy of the Weibull distribution. + + + + + Gets the skewness of the Weibull distribution. + + + + + Gets the mode of the Weibull distribution. 
+ + + + + Gets the median of the Weibull distribution. + + + + + Gets the minimum of the Weibull distribution. + + + + + Gets the maximum of the Weibull distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Generates a sample from the Weibull distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Weibull distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Implemented according to: Parameter estimation of the Weibull probability distribution, 1994, Hongzhu Qiao, Chris P. Tsokos + + + + Returns a Weibull distribution. + + + + Generates a sample from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. 
+ The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Multivariate Wishart distribution. This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The Wishart distribution + is the conjugate prior for the precision (inverse covariance) matrix of the multivariate + normal distribution. + Wikipedia - Wishart distribution. + + + + + The degrees of freedom for the Wishart distribution. + + + + + The scale matrix for the Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The random number generator which is used to draw random samples. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Gets or sets the degrees of freedom (n) for the Wishart distribution. + + + + + Gets or sets the scale matrix (V) for the Wishart distribution. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + + + + Gets the variance of the distribution. + + The variance of the distribution. + + + + Evaluates the probability density function for the Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + A random number from this distribution. + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The cholesky decomposition to use. + a random number from the distribution. + + + + Discrete Univariate Zipf distribution. + Zipf's law, an empirical law formulated using mathematical statistics, refers to the fact + that many types of data studied in the physical and social sciences can be approximated with + a Zipfian distribution, one of a family of related discrete power law probability distributions. + For details about this distribution, see + Wikipedia - Zipf distribution. + + + + + The s parameter of the distribution. + + + + + The n parameter of the distribution. + + + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. 
+ + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Gets or sets the s parameter of the distribution. + + + + + Gets or sets the n parameter of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The s parameter of the distribution. + The n parameter of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the Zipf distribution without doing parameter checking. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + a random number from the Zipf distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of zipf distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. 
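The Zipf entries follow the same split between instance members (Probability, CumulativeDistribution, Sample) and static helpers. A hedged sketch follows, again assuming the MathNet.Numerics.Distributions namespace and the published constructor signature Zipf(s, n); the chosen values of s and n are arbitrary.

```csharp
using System;
using MathNet.Numerics.Distributions;

class ZipfDemo
{
    static void Main()
    {
        // s = 1.1 (exponent parameter), n = 100 (number of elements).
        var zipf = new Zipf(1.1, 100);

        // Probability mass and cumulative distribution, see the PMF/CDF entries above.
        Console.WriteLine($"P(X = 1)  = {zipf.Probability(1)}");
        Console.WriteLine($"P(X <= 5) = {zipf.CumulativeDistribution(5)}");

        // Draw a few random samples.
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine($"sample {i}: {zipf.Sample()}");
        }
    }
}
```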
+ + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Integer number theory functions. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Find out whether the provided 32 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 64 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 32 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 64 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 32 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 64 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 32 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Find out whether the provided 64 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Raises 2 to the provided integer exponent (0 <= exponent < 31). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Raises 2 to the provided integer exponent (0 <= exponent < 63). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Evaluate the binary logarithm of an integer number. + + Two-step method using a De Bruijn-like sequence table lookup. + + + + Find the closest perfect power of two that is larger or equal to the provided + 32 bit integer. + + The number of which to find the closest upper power of two. + A power of two. + + + + + Find the closest perfect power of two that is larger or equal to the provided + 64 bit integer. + + The number of which to find the closest upper power of two. 
+ A power of two. + + + + + Returns the greatest common divisor (gcd) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's + algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the greatest common divisor (gcd) of two big integers. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two big integers. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Collection of functions equivalent to those provided by Microsoft Excel + but backed instead by Math.NET Numerics. + We do not recommend to use them except in an intermediate phase when + porting over solutions previously implemented in Excel. + + + + + An algorithm failed to converge. + + + + + An algorithm failed to converge due to a numerical breakdown. + + + + + An error occurred calling native provider function. + + + + + An error occurred calling native provider function. + + + + + Native provider was unable to allocate sufficient memory. + + + + + Native provider failed LU inversion do to a singular U matrix. 
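Looking back at the integer number-theory entries above (canonical modulus versus remainder, gcd/lcm, extended gcd), the worked example there still uses the legacy Fn.GreatestCommonDivisor name. In current Math.NET Numerics these helpers are exposed on a static Euclid class; the sketch below assumes that class name and the published signatures, and reproduces the documented 45/18 example.

```csharp
using System;
using MathNet.Numerics;

class EuclidDemo
{
    static void Main()
    {
        // gcd and lcm of two integers, as documented above.
        Console.WriteLine(Euclid.GreatestCommonDivisor(45, 18)); // 9
        Console.WriteLine(Euclid.LeastCommonMultiple(45, 18));   // 90

        // Extended gcd: finds x, y such that 45*x + 18*y == gcd(45, 18),
        // matching the worked example in the documentation (x = 1, y = -2).
        long x, y;
        long d = Euclid.ExtendedGreatestCommonDivisor(45, 18, out x, out y);
        Console.WriteLine($"d = {d}, x = {x}, y = {y}");

        // Canonical modulus vs. remainder: the sign of the result differs for negative input.
        Console.WriteLine(Euclid.Modulus(-3, 5));   // 2  (sign of the divisor)
        Console.WriteLine(Euclid.Remainder(-3, 5)); // -3 (sign of the dividend)
    }
}
```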
+ + + + + Compound Monthly Return or Geometric Return or Annualized Return + + + + + Average Gain or Gain Mean + This is a simple average (arithmetic mean) of the periods with a gain. It is calculated by summing the returns for gain periods (return 0) + and then dividing the total by the number of gain periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Average Loss or LossMean + This is a simple average (arithmetic mean) of the periods with a loss. It is calculated by summing the returns for loss periods (return < 0) + and then dividing the total by the number of loss periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Calculation is similar to Standard Deviation , except it calculates an average (mean) return only for periods with a gain + and measures the variation of only the gain periods around the gain mean. Measures the volatility of upside performance. + © Copyright 1996, 1999 Gary L.Gastineau. First Edition. © 1992 Swiss Bank Corporation. + + + + + Similar to standard deviation, except this statistic calculates an average (mean) return for only the periods with a loss and then + measures the variation of only the losing periods around this loss mean. This statistic measures the volatility of downside performance. + + http://www.offshore-library.com/kb/statistics.php + + + + This measure is similar to the loss standard deviation except the downside deviation + considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean. + For example, if the MAR is 7%, the downside deviation would measure the variation of each period that falls below + 7%. (The loss standard deviation, on the other hand, would take only losing periods, calculate an average return for + the losing periods, and then measure the variation between each losing return and the losing return average). + + + + + A measure of volatility in returns below the mean. It's similar to standard deviation, but it only + looks at periods where the investment return was less than average return. + + + + + Measures a fund’s average gain in a gain period divided by the fund’s average loss in a losing + period. Periods can be monthly or quarterly depending on the data frequency. + + + + + Find value x that minimizes the scalar function f(x), constrained within bounds, using the Golden Section algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + The missing gradient is evaluated numerically (forward difference). + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. 
+ An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. + An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + + Find both complex roots of the quadratic equation c + b*x + a*x^2 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The coefficients of the polynomial in ascending order, e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2" + The roots of the polynomial + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The polynomial. + The roots of the polynomial + + + + Find all roots of the Chebychev polynomial of the first kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*(2i-1)/(2n)) + + + + Find all roots of the Chebychev polynomial of the second kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*i/(n-1)) + + + + Least-Squares Curve Fitting Routines + + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as [a, b] array, + where a is the intercept and b the slope. 
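The least-squares entries around this point document line, polynomial and generic linear-combination fits. A hedged sketch of the two most common calls follows, assuming the static Fit class of Math.NET Numerics (the class name itself is not visible in the entries); the sample data is invented.

```csharp
using System;
using MathNet.Numerics;

class FitDemo
{
    static void Main()
    {
        // Noisy samples of roughly y = 2 + 3x (illustrative data only).
        double[] x = { 0, 1, 2, 3, 4 };
        double[] y = { 2.1, 4.9, 8.2, 10.9, 14.1 };

        // Line fit: returns (intercept a, slope b) as documented above.
        var line = Fit.Line(x, y);
        Console.WriteLine($"a = {line.Item1}, b = {line.Item2}");

        // Second-order polynomial fit: returns [p0, p1, p2],
        // compatible with Polynomial.Evaluate per the entries that follow.
        double[] p = Fit.Polynomial(x, y, 2);
        Console.WriteLine(string.Join(", ", p));
    }
}
```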
+ + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning its best fitting parameters as (a, r) tuple. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning a function y' for the best fitting polynomial. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Weighted Least-Squares fitting the points (x,y) and weights w to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning a function y' for the best fitting combination. 
+ If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Weighted Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) and weights w to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning its best fitting parameter p. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning its best fitting parameter p0 and p1. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning its best fitting parameter p0, p1 and p2. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning a function y' for the best fitting curve. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate a linearly spaced sample vector of the given length between the specified values (inclusive). + Equivalent to MATLAB linspace but with the length as first instead of last argument. 
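The sample-generation entries here mirror MATLAB's linspace/logspace and colon operators. A small sketch follows, assuming the static Generate class; only LinearSpaced is shown, and the function sampling step uses plain Array.ConvertAll so that no overload name absent from the text has to be assumed.

```csharp
using System;
using MathNet.Numerics;

class GenerateDemo
{
    static void Main()
    {
        // 5 linearly spaced points between 0 and 1 inclusive: 0, 0.25, 0.5, 0.75, 1.
        double[] t = Generate.LinearSpaced(5, 0.0, 1.0);
        Console.WriteLine(string.Join(", ", t));

        // Sample a function at those points (here simply x^2).
        double[] y = Array.ConvertAll(t, v => v * v);
        Console.WriteLine(string.Join(", ", y));
    }
}
```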
+ + + + + Generate samples by sampling a function at linearly spaced points between the specified values (inclusive). + + + + + Generate a base 10 logarithmically spaced sample vector of the given length between the specified decade exponents (inclusive). + Equivalent to MATLAB logspace but with the length as first instead of last argument. + + + + + Generate samples by sampling a function at base 10 logarithmically spaced points between the specified decade exponents (inclusive). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate samples by sampling a function at linearly spaced points within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + + + + + Create a periodic wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic wave. + + The number of samples to generate. + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). 
Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a Sine wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite Sine wave sequence. + + Samples per unit. + Frequency in samples per unit. + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic square wave, starting with the high phase. + + The number of samples to generate. + Number of samples of the high phase. + Number of samples of the low phase. + Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create an infinite periodic square wave sequence, starting with the high phase. + + Number of samples of the high phase. + Number of samples of the low phase. + Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create a periodic triangle wave, starting with the raise phase from the lowest sample. + + The number of samples to generate. + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic triangle wave sequence, starting with the raise phase from the lowest sample. + + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create a periodic sawtooth wave, starting with the lowest sample. + + The number of samples to generate. + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic sawtooth wave sequence, starting with the lowest sample. + + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an array with each field set to the same value. + + The number of samples to generate. + The value that each field should be set to. + + + + Create an infinite sequence where each element has the same value. + + The value that each element should be set to. + + + + Create a Heaviside Step sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. + + + + Create an infinite Heaviside Step sample sequence. + + The maximal reached peak. + Offset to the time axis. + + + + Create a Kronecker Delta impulse sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Create a Kronecker Delta impulse sample vector. + + The maximal reached peak. + Offset to the time axis, hence the sample index of the impulse. + + + + Create a periodic Kronecker Delta impulse sample vector. + + The number of samples to generate. + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. 
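The waveform entries above (periodic, sine, square, triangle, sawtooth, step, impulse) all take a sample count plus rate, amplitude and offset parameters. A hedged sine-wave sketch follows, assuming a Generate.Sinusoidal overload with (length, samplingRate, frequency, amplitude); the numbers are illustrative.

```csharp
using System;
using MathNet.Numerics;

class WaveDemo
{
    static void Main()
    {
        // 48 samples of a 1 kHz sine sampled at 48 kHz with amplitude 1.0,
        // i.e. exactly one period of the tone. The sampling rate satisfies
        // the Nyquist criterion noted in the entries above.
        double[] sine = Generate.Sinusoidal(48, 48000.0, 1000.0, 1.0);

        Console.WriteLine(string.Join(", ", sine));
    }
}
```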
+ + + + Create a Kronecker Delta impulse sample vector. + + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Generate samples generated by the given computation. + + + + + Generate an infinite sequence generated by the given computation. + + + + + Generate a Fibonacci sequence, including zero as first value. + + + + + Generate an infinite Fibonacci sequence, including zero as first value. + + + + + Create random samples, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create an infinite random sample sequence, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create samples with independent amplitudes of standard distribution. + + + + + Create an infinite sample sequence with independent amplitudes of standard distribution. + + + + + Create samples with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create an infinite sample sequence with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Generate samples by sampling a function at samples from a probability distribution. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution. + + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution. + + + + + Globalized String Handling Helpers + + + + + Tries to get a from the format provider, + returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format + provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Globalized Parsing: Tokenize a node by splitting it into several nodes. + + Node that contains the trimmed string to be tokenized. + List of keywords to tokenize by. + keywords to skip looking for (because they've already been handled). 
+ + + + Globalized Parsing: Parse a double number + + First token of the number. + Culture Info. + The parsed double number using the given culture information. + + + + + Globalized Parsing: Parse a float number + + First token of the number. + Culture Info. + The parsed float number using the given culture information. + + + + + Calculates r^2, the square of the sample correlation coefficient between + the observed outcomes and the observed predictor values. + Not to be confused with R^2, the coefficient of determination, see . + + The modelled/predicted values + The observed/actual values + Squared Person product-momentum correlation coefficient. + + + + Calculates r, the sample correlation coefficient between the observed outcomes + and the observed predictor values. + + The modelled/predicted values + The observed/actual values + Person product-momentum correlation coefficient. + + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The Standard Error of the regression + + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The degrees of freedom by which the + number of samples is reduced for performing the Standard Error calculation + The Standard Error of the regression + + + + Calculates the R-Squared value, also known as coefficient of determination, + given some modelled and observed values. + + The values expected from the model. + The actual values obtained. + Coefficient of determination. + + + + Complex Fast (FFT) Implementation of the Discrete Fourier Transform (DFT). + + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed from the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. 
+ Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. 
+ + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. 
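The Fourier entries operate in place on complex sample vectors (or split real/imaginary arrays) and take a convention-options flag. A hedged forward/inverse round-trip sketch follows, assuming the MathNet.Numerics.IntegralTransforms namespace and the Matlab convention described in the enum entries further below.

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FftDemo
{
    static void Main()
    {
        // Eight time-domain samples of a simple real-valued signal.
        var samples = new Complex[8];
        for (int i = 0; i < samples.Length; i++)
        {
            samples[i] = new Complex(Math.Sin(2 * Math.PI * i / 8.0), 0.0);
        }

        // In-place forward FFT using the Matlab scaling convention.
        Fourier.Forward(samples, FourierOptions.Matlab);

        // In-place inverse FFT restores the original samples up to rounding.
        Fourier.Inverse(samples, FourierOptions.Matlab);

        Console.WriteLine(samples[1]);
    }
}
```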
+ + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Generate the frequencies corresponding to each index in frequency space. + The frequency space has a resolution of sampleRate/N. + Index 0 corresponds to the DC part, the following indices correspond to + the positive frequencies up to the Nyquist frequency (sampleRate/2), + followed by the negative frequencies wrapped around. + + Number of samples. + The sampling rate of the time-space data. + + + + Fourier Transform Convention + + + + + Inverse integrand exponent (forward: positive sign; inverse: negative sign). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling and common exponent (used in Maple). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction (used in Matlab). [= AsymmetricScaling] + + + + + Inverse integrand exponent; No scaling at all (used in all Numerical Recipes based implementations). [= InverseExponent | NoScaling] + + + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + + + Naive forward DHT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Hartley Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive inverse DHT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Hartley Transform Convention Options. + Corresponding time-space vector. + + + + Rescale FFT-the resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Rescale the iFFT-resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Naive generic DHT, useful e.g. 
to verify faster algorithms. + + Time-space sample vector. + Corresponding frequency-space vector. + + + + Hartley Transform Convention + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling. + + + + + Numerical Integration (Quadrature). + + + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Legendre quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. 
+ Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Numerical Contour Integration of a complex-valued function over a real variable,. + + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. 
When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Analytic integration algorithm for smooth functions with no discontinuities + or derivative discontinuities and no poles inside the interval. + + + + + Maximum number of iterations, until the asked + maximum error is (likely to be) satisfied. + + + + + Approximate the integral by the double exponential transformation + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximate the integral by the double exponential transformation + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Compute the abscissa vector for a single level. + + The level to evaluate the abscissa vector for. + Abscissa Vector. + + + + Compute the weight vector for a single level. + + The level to evaluate the weight vector for. + Weight Vector. + + + + Precomputed abscissa vector per level. + + + + + Precomputed weight vector per level. + + + + + Getter for the order. + + + + + Getter that returns a clone of the array containing the Kronrod abscissas. + + + + + Getter that returns a clone of the array containing the Kronrod weights. + + + + + Getter that returns a clone of the array containing the Gauss weights. + + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth function to integrate + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth complex function to integrate, defined on the real axis. + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. 
+ The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + + Initializes a new instance of the class. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + Gettter for the ith abscissa. + + Index of the ith abscissa. + The ith abscissa. + + + + Getter that returns a clone of the array containing the abscissas. + + + + + Getter for the ith weight. + + Index of the ith weight. + The ith weight. + + + + Getter that returns a clone of the array containing the weights. + + + + + Getter for the order. + + + + + Getter for the InvervalBegin. + + + + + Getter for the InvervalEnd. + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth function to integrate. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Contains a method to compute the Gauss-Kronrod abscissas/weights and precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + Contains a method to compute the Gauss-Kronrod abscissas/weights. 
+ + + + + Precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + + + Computes the Gauss-Kronrod abscissas/weights and Gauss weights. + + Defines an Nth order Gauss-Kronrod rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. + Object containing the non-negative abscissas/weights, order. + + + + Returns coefficients of a Stieltjes polynomial in terms of Legendre polynomials. + + + + + Return value and derivative of a Legendre series at given points. + + + + + Return value and derivative of a Legendre polynomial of order at given points. + + + + + Creates a Gauss-Kronrod point. + + + + + Getter for the GaussKronrodPoint. + + Defines an Nth order Gauss-Kronrod rule. Precomputed Gauss-Kronrod abscissas/weights for orders 15, 21, 31, 41, 51, 61 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, and order. + + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Computes the Gauss-Legendre abscissas/weights. + See Pavel Holoborodko for a description of the algorithm. + + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. 1e-10 is usually fine. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Creates and maps a Gauss-Legendre point. + + + + + Getter for the GaussPoint. + + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Getter for the GaussPoint. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Maps the non-negative abscissas/weights from the interval [-1, 1] to the interval [intervalBegin, intervalEnd]. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Contains the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + + Contains two GaussPoint. + + + + + Approximation algorithm for definite integrals by the Trapezium rule of the Newton-Cotes family. 
+ + + Wikipedia - Trapezium Rule + + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, define don real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation algorithm for definite integrals by Simpson's rule. + + + + + Direct 3-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. 
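A short C# sketch of how the quadrature routines documented above are typically called. The entry points (`Integrate.OnClosedInterval`, `GaussLegendreRule.Integrate`, the Newton-Cotes composite rules) follow the Math.NET Numerics API as I understand it; treat the exact signatures as assumptions and check them against the library version actually shipped with oscardata.

```csharp
using System;
using MathNet.Numerics;               // Integrate
using MathNet.Numerics.Integration;   // GaussLegendreRule, NewtonCotesTrapeziumRule, SimpsonRule

class QuadratureDemo
{
    static void Main()
    {
        // Smooth integrand on a closed interval: ∫_0^1 e^(-x) dx = 1 - 1/e
        Func<double, double> f = x => Math.Exp(-x);

        // Default adaptive (double-exponential) quadrature with a target absolute error.
        double det = Integrate.OnClosedInterval(f, 0.0, 1.0, 1e-10);

        // Fixed-order Gauss-Legendre rule; order 32 is one of the precomputed orders.
        double gl = GaussLegendreRule.Integrate(f, 0.0, 1.0, 32);

        // Newton-Cotes composite rules: trapezium and Simpson (even partition count).
        double trap = NewtonCotesTrapeziumRule.IntegrateComposite(f, 0.0, 1.0, 1000);
        double simp = SimpsonRule.IntegrateComposite(f, 0.0, 1.0, 100);

        Console.WriteLine($"exact   = {1.0 - Math.Exp(-1.0)}");
        Console.WriteLine($"DET     = {det}");
        Console.WriteLine($"Gauss   = {gl}");
        Console.WriteLine($"Trapez  = {trap}");
        Console.WriteLine($"Simpson = {simp}");
    }
}
```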
+ + + + Composite N-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Even number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Interpolation Factory. + + + + + Creates an interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Floater-Hormann rational pole-free interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Bulirsch Stoer rational interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.BulirschStoerRationalInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a barycentric polynomial interpolation where the given sample points are equidistant. + + The sample points t, must be equidistant. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolatePolynomialEquidistantSorted + instead, which is more efficient. + + + + + Create a Neville polynomial interpolation based on arbitrary points. + If the points happen to be equidistant, consider to use the much more robust PolynomialEquidistant instead. + Otherwise, consider whether RationalWithoutPoles would not be a more robust alternative. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.NevillePolynomialInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a piecewise linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. 
+ + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LinearSpline.InterpolateSorted + instead, which is more efficient. + + + + + Create piecewise log-linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LogLinear.InterpolateSorted + instead, which is more efficient. + + + + + Create an piecewise natural cubic spline interpolation based on arbitrary points, + with zero secondary derivatives at the boundaries. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateNaturalSorted + instead, which is more efficient. + + + + + Create an piecewise cubic Akima spline interpolation based on arbitrary points. + Akima splines are robust to outliers. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateAkimaSorted + instead, which is more efficient. + + + + + Create a piecewise cubic Hermite spline interpolation based on arbitrary points + and their slopes/first derivative. + + The sample points t. + The sample point values x(t). + The slope at the sample points. Optimized for arrays. + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateHermiteSorted + instead, which is more efficient. + + + + + Create a step-interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.StepInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Barycentric Interpolation Algorithm. + + Supports neither differentiation nor integration. + + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + Barycentric weights (N), sorted ascendingly by x. + + + + Create a barycentric polynomial interpolation from a set of (x,y) value pairs with equidistant x, sorted ascendingly by x. + + + + + Create a barycentric polynomial interpolation from an unordered set of (x,y) value pairs with equidistant x. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a barycentric polynomial interpolation from an unsorted set of (x,y) value pairs with equidistant x. 
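For the interpolation factory entries above, a minimal usage sketch: creating a piecewise linear interpolant from sample points and evaluating it. `LinearSpline.InterpolateSorted` is the sorted constructor referenced in these docs; the `Interpolate.Linear` convenience call is my assumption about the factory class and may differ by library version.

```csharp
using System;
using MathNet.Numerics;                 // Interpolate factory (assumed)
using MathNet.Numerics.Interpolation;   // LinearSpline, IInterpolation

class InterpolationDemo
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0 };    // sample points, sorted ascending
        double[] x = { 0.0, 0.8, 0.9, 0.1, -0.7 };   // sample values x(t)

        // Data already sorted: use the sorted constructor directly (more efficient).
        IInterpolation linear = LinearSpline.InterpolateSorted(t, x);

        // Factory route for arbitrary (possibly unsorted) points.
        IInterpolation viaFactory = Interpolate.Linear(t, x);

        Console.WriteLine(linear.Interpolate(2.5));      // piecewise linear value at t = 2.5
        Console.WriteLine(viaFactory.Interpolate(2.5));  // same result via the factory
    }
}
```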
+ + + + + Create a barycentric polynomial interpolation from a set of values related to linearly/equidistant spaced points within an interval. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Rational Interpolation (with poles) using Roland Bulirsch and Josef Stoer's Algorithm. + + + + This algorithm supports neither differentiation nor integration. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Bulirsch-Stoer rational interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). 
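A sketch of the pole-free rational (Floater-Hormann) interpolation described above. The sorted constructor name is taken from these docs; the position of the order argument (3 to 8 usually gives good results, per the remarks) is an assumption.

```csharp
using System;
using MathNet.Numerics.Interpolation;   // Barycentric, IInterpolation

class FloaterHormannDemo
{
    static void Main()
    {
        double[] t = { 0.0, 0.5, 1.0, 2.0, 3.5, 5.0 };    // sorted sample points
        double[] x = { 1.0, 0.8, 0.5, 0.2, 0.05, 0.01 };  // sample values x(t)

        // Barycentric rational interpolation without poles, order 3.
        IInterpolation fh = Barycentric.InterpolateRationalFloaterHormannSorted(t, x, 3);

        Console.WriteLine(fh.Interpolate(1.7));
        // Per the docs above, Barycentric supports neither differentiation nor
        // integration, so only Interpolate(t) is meaningful here.
    }
}
```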
+ + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Cubic Spline Interpolation. + + Supports both differentiation and integration. + + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + third order spline coefficients (N) + + + + Create a Hermite cubic spline interpolation from a set of (x,y) value pairs and their slope (first derivative), sorted ascendingly by x. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + + + + + Create an Akima cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + Akima splines are robust to outliers. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + + + + + Create a cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x, + and custom boundary/termination conditions. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + + + + + Create a natural cubic spline interpolation from a set of (x,y) value pairs + and zero second derivatives at the two boundaries, sorted ascendingly by x. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + + + + + Three-Point Differentiation Helper. + + Sample Points t. + Sample Values x(t). + Index of the point of the differentiation. + Index of the first sample. + Index of the second sample. + Index of the third sample. + The derivative approximation. + + + + Tridiagonal Solve Helper. + + The a-vector[n]. + The b-vector[n], will be modified by this function. + The c-vector[n]. + The d-vector[n], will be modified by this function. 
+ The x-vector[n] + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Interpolation within the range of a discrete set of known data points. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Piece-wise Linear Interpolation. + + Supports both differentiation and integration. + + + Sample points (N+1), sorted ascending + Sample values (N or N+1) at the corresponding points; intercept, zero order coefficients + Slopes (N) at the sample points (first order coefficients): N + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Piece-wise Log-Linear Interpolation + + This algorithm supports differentiation, not integration. 
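Since the cubic-spline entries above advertise both differentiation and integration, here is a small sketch using the sorted constructors named in these docs (`InterpolateNaturalSorted`, `InterpolateAkimaSorted`); the sample data is illustrative only.

```csharp
using System;
using MathNet.Numerics.Interpolation;   // CubicSpline

class SplineDemo
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 };
        double[] x = { 0.0, 0.84, 0.91, 0.14, -0.76, -0.96 };  // roughly sin(t)

        CubicSpline natural = CubicSpline.InterpolateNaturalSorted(t, x); // zero 2nd derivative at the ends
        CubicSpline akima   = CubicSpline.InterpolateAkimaSorted(t, x);   // robust to outliers, needs >= 5 points

        Console.WriteLine(natural.Interpolate(2.5));     // value at t = 2.5
        Console.WriteLine(natural.Differentiate(2.5));   // first derivative at t = 2.5
        Console.WriteLine(natural.Integrate(0.0, 5.0));  // definite integral over [0, 5]
        Console.WriteLine(akima.Interpolate(2.5));
    }
}
```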
+ + + + Internal Spline Interpolation + + + + Sample points (N), sorted ascending + Natural logarithm of the sample values (N) at the corresponding points + + + + Create a piecewise log-linear interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Lagrange Polynomial Interpolation using Neville's Algorithm. + + + + This algorithm supports differentiation, but doesn't support integration. + + + When working with equidistant or Chebyshev sample points it is + recommended to use the barycentric algorithms specialized for + these cases instead of this arbitrary Neville algorithm. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Neville polynomial interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Quadratic Spline Interpolation. + + Supports both differentiation and integration. + + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. 
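A sketch for the log-linear and Neville polynomial interpolants documented above, using the sorted constructors referenced in these docs. I assume `LogLinear.InterpolateSorted` takes the raw sample values and applies the logarithm internally; verify against the shipped library.

```csharp
using System;
using MathNet.Numerics.Interpolation;   // LogLinear, NevillePolynomialInterpolation

class MoreInterpolationDemo
{
    static void Main()
    {
        double[] t = { 1.0, 2.0, 3.0, 4.0 };
        double[] x = { 2.0, 4.0, 8.0, 16.0 };   // exponential-looking data

        // Piecewise log-linear: linear in log(x), so it follows exponential trends.
        // Per the docs above it supports differentiation but not integration.
        var logLinear = LogLinear.InterpolateSorted(t, x);
        Console.WriteLine(logLinear.Interpolate(2.5));
        Console.WriteLine(logLinear.Differentiate(2.5));

        // Neville polynomial interpolation (best kept to small, well-behaved sets).
        var neville = NevillePolynomialInterpolation.InterpolateSorted(t, x);
        Console.WriteLine(neville.Interpolate(2.5));
    }
}
```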
+ + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Left and right boundary conditions. + + + + + Natural Boundary (Zero second derivative). + + + + + Parabolically Terminated boundary. + + + + + Fixed first derivative at the boundary. + + + + + Fixed second derivative at the boundary. + + + + + A step function where the start of each segment is included, and the last segment is open-ended. + Segment i is [x_i, x_i+1) for i < N, or [x_i, infinity] for i = N. + The domain of the function is all real numbers, such that y = 0 where x <. + + Supports both differentiation and integration. + + + Sample points (N), sorted ascending + Samples values (N) of each segment starting at the corresponding sample point. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t. + + + + + Wraps an interpolation with a transformation of the interpolated values. + + Neither differentiation nor integration is supported. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + A Matrix class with dense storage. 
The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. 
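To make the dense-matrix construction entries above concrete, a short sketch of a few creation paths (copy of a 2-D array, zero-initialized builder, init function, raw column-major binding). Names follow the Math.NET Numerics API as I understand it.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;          // Matrix<T> builder
using MathNet.Numerics.LinearAlgebra.Double;   // DenseMatrix

class DenseMatrixDemo
{
    static void Main()
    {
        // Independent copy of a 2-D array.
        var a = DenseMatrix.OfArray(new double[,] { { 1, 2 }, { 3, 4 } });

        // Zero-initialized 3x3 matrix via the generic builder.
        Matrix<double> zeros = Matrix<double>.Build.Dense(3, 3);

        // Initialize each cell with an init function (here: i + j).
        Matrix<double> init = Matrix<double>.Build.Dense(3, 3, (i, j) => i + j);

        // Raw column-major array bound directly: no copy, changes are shared.
        var bound = new DenseMatrix(2, 2, new double[] { 1, 3, 2, 4 });

        Console.WriteLine(a * bound);          // matrix product
        Console.WriteLine(init.Transpose());
        Console.WriteLine(zeros.RowCount);
    }
}
```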
+ + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. 
+ The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. 
It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. 
+ This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. 
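A sketch of the dense-vector operations documented above (norms, dot product, index queries, element-wise arithmetic). The member names mirror the entries in these docs; the sample values are illustrative only.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;          // Vector<T>
using MathNet.Numerics.LinearAlgebra.Double;   // DenseVector

class DenseVectorDemo
{
    static void Main()
    {
        var v = DenseVector.OfArray(new double[] { 3.0, -4.0, 0.0 });
        var w = DenseVector.OfArray(new double[] { 1.0, 2.0, 2.0 });

        Console.WriteLine(v.L1Norm());               // 7  (Manhattan norm)
        Console.WriteLine(v.L2Norm());               // 5  (Euclidean norm)
        Console.WriteLine(v.InfinityNorm());         // 4  (maximum absolute value)
        Console.WriteLine(v.DotProduct(w));          // 3*1 + (-4)*2 + 0*2 = -5
        Console.WriteLine(v * w);                    // same dot product via the * operator
        Console.WriteLine(v.AbsoluteMaximumIndex()); // index of the largest |element| -> 1

        Vector<double> sum = v + w;                  // element-wise addition
        Console.WriteLine(sum.Sum());                // sum of all elements
    }
}
```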
+ + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the remainder of. + The divisor to use, + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a double dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. 
+ Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. 
+ + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. 
+ The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. 
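A short sketch of how the dense Cholesky and LU factorizations documented above are typically used to solve a linear system. The Matrix&lt;double&gt;.Build factory and the .Cholesky(), .LU() and .Solve() calls are assumptions based on the Math.NET Numerics API, not part of this documentation.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSketch
{
    static void Main()
    {
        // A symmetric, positive definite 2x2 matrix and a right-hand side b.
        var a = Matrix<double>.Build.DenseOfArray(new[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        // LU factorization: A = L*U (with pivoting), then solve Ax = b.
        var xLu = a.LU().Solve(b);

        // Cholesky factorization: A = L*L', valid because A is positive definite.
        var chol = a.Cholesky();
        var xChol = chol.Solve(b);

        Console.WriteLine($"LU: {xLu}  Cholesky: {xChol}  det: {chol.Determinant}");
    }
}
```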
+ + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + Matrix V is encoded in the property EigenVectors in the way that: + - column corresponding to real eigenvalue represents real eigenvector, + - columns corresponding to the pair of complex conjugate eigenvalues + lambda[i] and lambda[i+1] encode real and imaginary parts of eigenvectors. + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. 
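Because the eigenvalue decomposition remarks above are fairly dense, a small hedged sketch follows. The member names Evd(), EigenValues, EigenVectors and D are assumed from the Math.NET Numerics API; the check at the end mirrors the A*V = V*D identity stated in the remarks.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class EvdSketch
{
    static void Main()
    {
        // A symmetric matrix, so A = V*D*V' with orthogonal V (see remarks above).
        var a = Matrix<double>.Build.DenseOfArray(new[,] { { 2.0, 1.0 }, { 1.0, 2.0 } });

        var evd = a.Evd();

        // Eigenvalues are returned as complex numbers; for a symmetric matrix
        // the imaginary parts are zero (here: 1 and 3).
        Console.WriteLine(evd.EigenValues);

        // Verify A*V == V*D as stated in the class remarks (difference should be ~0).
        var diff = a * evd.EigenVectors - evd.EigenVectors * evd.D;
        Console.WriteLine(diff.FrobeniusNorm());
    }
}
```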
+ + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. 
+ For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. 
+ Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. 
+ + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. 
Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + double version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. 
+ + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. 
+ Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
Note that much of the success of the solver depends on the selection of the proper preconditioner.

The Bi-CGSTAB algorithm was taken from:
"Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods",
Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra,
Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst,
http://www.netlib.org/templates/Templates.html
The algorithm is described in Chapter 2, section 2.3.8, page 27.

The example code below provides an indication of the possible use of the solver.
+
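The example code referred to above did not survive extraction, so here is a minimal replacement sketch. It assumes the BiCgStab, Iterator&lt;double&gt;, stop-criterion and DiagonalPreconditioner types from the Math.NET Numerics solver namespaces; the Solve(matrix, input, result, iterator, preconditioner) signature follows the description given for this solver.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        // Small non-symmetric test system A*x = b.
        var a = Matrix<double>.Build.DenseOfArray(new[,] { { 4.0, 1.0 }, { 2.0, 3.0 } });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });
        var x = Vector<double>.Build.Dense(2);   // result vector, filled by the solver

        // Stop after at most 1000 iterations or when the residual is small enough.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        System.Console.WriteLine(x);
    }
}
```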
Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax
Instance of the matrix A. Residual values vector. Instance of the vector x. Instance of the vector b.

Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the solution vector and x is the unknown vector.
The coefficient matrix, A. The solution vector, b. The result vector, x.
The iterator to use to control when to stop iterating. The preconditioner to use for approximations.

A composite matrix solver. The actual solver is made up of a sequence of matrix solvers.

Solver based on:
"Faster PDE-based simulations using robust composite linear solvers",
S. Bhowmick, P. Raghavan, L. McInnes, B. Norris,
Future Generation Computer Systems, Vol 20, 2004, pp 373-387

Note that if an iterator is passed to this solver it will be used for all the sub-solvers.
+
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
Note that much of the success of the solver depends on the selection of the proper preconditioner.

The GPBiCG algorithm was taken from:
"GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness",
S. Fujino,
Applied Numerical Mathematics, Volume 41, 2002, pp 107-117

The example code below provides an indication of the possible use of the solver.
+
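The referenced example code is likewise missing; usage mirrors the BiCgStab sketch above with only the solver type changed (the class name GpBiCg is assumed). The BiCgStab/GPBiCG switching steps documented below can optionally be tuned on the solver instance.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class GpBiCgSketch
{
    // Solves A*x = b with the GPBiCG solver and a diagonal preconditioner.
    public static Vector<double> Solve(Matrix<double> a, Vector<double> b)
    {
        var x = Vector<double>.Build.Dense(b.Count);
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        new GpBiCg().Solve(a, b, x, iterator, new DiagonalPreconditioner());
        return x;
    }
}
```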
Indicates the number of BiCGStab steps that should be taken before switching.

Indicates the number of GPBiCG steps that should be taken before switching.

Gets or sets the number of steps taken with the BiCgStab algorithm before switching over to the GPBiCG algorithm.

Gets or sets the number of steps taken with the GPBiCG algorithm before switching over to the BiCgStab algorithm.

Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax
Instance of the matrix A. Residual values vector. Instance of the vector x. Instance of the vector b.

Decides whether to run BiCgStab steps in the given iteration.
The iteration number. Returns true if yes, otherwise false.

Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the solution vector and x is the unknown vector.
The coefficient matrix, A. The solution vector, b. The result vector, x.
The iterator to use to control when to stop iterating. The preconditioner to use for approximations.

An incomplete, level 0, LU factorization preconditioner.

The ILU(0) algorithm was taken from:
"Iterative Methods for Sparse Linear Systems",
Yousef Saad
The algorithm is described in Chapter 10, section 10.3.2, page 275.
+
+
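The preconditioner members documented below (Initialize, then Approximate) can also be exercised standalone, outside an iterative solver. A hedged sketch, with ILU0Preconditioner assumed as the concrete class name:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

static class Ilu0Sketch
{
    // Applies one ILU(0) preconditioning step: approximately solves M*x = b,
    // where M ~ A comes from the incomplete LU factorization of the square matrix a.
    public static Vector<double> Precondition(Matrix<double> a, Vector<double> b)
    {
        var ilu0 = new ILU0Preconditioner();
        ilu0.Initialize(a);                 // builds the combined L/U storage

        var x = Vector<double>.Build.Dense(b.Count);
        ilu0.Approximate(b, x);             // right hand side in, result out
        return x;
    }
}
```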
The matrix holding the lower (L) and upper (U) matrices. The decomposition matrices are combined to reduce storage.

Returns the upper triangular matrix that was created during the LU decomposition.
A new matrix containing the upper triangular elements.

Returns the lower triangular matrix that was created during the LU decomposition.
A new matrix containing the lower triangular elements.

Initializes the preconditioner and loads the internal data structures.
The matrix upon which the preconditioner is based. If the matrix is null. If the matrix is not a square matrix.

Approximates the solution to the matrix equation Ax = b.
The right hand side vector. The left hand side vector. Also known as the result vector.

This class performs an Incomplete LU factorization with drop tolerance and partial pivoting. The drop tolerance indicates which additional entries will be dropped from the factorized LU matrices.

The ILUTP-Mem algorithm was taken from:
"ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner",
Tzu-Yi Chen, Department of Mathematics and Computer Science,
Pomona College, Claremont CA 91711, USA
Published in: Lecture Notes in Computer Science, Volume 3046, 2004, pp. 20-28
The algorithm is described in Section 2, page 22.
+
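Since the fill level, drop tolerance and pivot tolerance settings documented below interact, a short construction sketch may help. The class name ILUTPPreconditioner and a constructor taking (fill level, drop tolerance, pivot tolerance) are assumptions derived from the parameter list described here; verify against the actual assembly.

```csharp
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

static class IlutpSketch
{
    public static ILUTPPreconditioner Create()
    {
        // Assumed argument order, following the documented settings:
        // fill level, drop tolerance, pivot tolerance.
        // Allow 2x the original non-zeros, drop entries below 1e-4,
        // and enable partial pivoting (pivot tolerance > 0).
        return new ILUTPPreconditioner(2.0, 1e-4, 0.5);
    }
}
```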
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
"ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors",
Man-Chung Yeung and Tony F. Chan,
SIAM Journal on Scientific Computing, Volume 21, Number 4, pp. 1263-1290

The example code below provides an indication of the possible use of the solver.
+
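Again the referenced example code is missing; a minimal sketch follows, assuming the class name MlkBiCgStab and the same iterator/preconditioner types used in the earlier solver sketches.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class MlkBiCgStabSketch
{
    // Solves A*x = b with the ML(k)-BiCGStab solver.
    public static Vector<double> Solve(Matrix<double> a, Vector<double> b)
    {
        var x = Vector<double>.Build.Dense(b.Count);
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // The number of Lanczos starting vectors can be tuned via the property
        // documented below; the default is used here.
        new MlkBiCgStab().Solve(a, b, x, iterator, new DiagonalPreconditioner());
        return x;
    }
}
```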
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
"Iterative Methods for Sparse Linear Systems",
Yousef Saad
The algorithm is described in Chapter 7, section 7.4.3, page 219.

The example code below provides an indication of the possible use of the solver.
+
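A minimal replacement for the missing example code, here combined with a small sparse system. The class names TFQMR and MILU0Preconditioner are assumptions based on the documentation in this file; the MILU(0) preconditioner is chosen because it expects sparse (CSR) storage.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class TfqmrSketch
{
    // Solves a small sparse system A*x = b with TFQMR and a MILU(0) preconditioner.
    public static Vector<double> Solve()
    {
        var a = new SparseMatrix(3, 3);     // CSR-backed sparse matrix (see below)
        a[0, 0] = 4.0; a[0, 1] = 1.0;
        a[1, 1] = 3.0; a[1, 2] = 1.0;
        a[2, 2] = 5.0;

        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });
        var x = Vector<double>.Build.Dense(3);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        new TFQMR().Solve(a, b, x, iterator, new MILU0Preconditioner());
        return x;
    }
}
```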
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
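To make the sparse-matrix entries above concrete, a short illustrative sketch; member names such as `NonZerosCount` and `LowerTriangle` are assumed from the current Math.NET Numerics API and are not quoted from this repository.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

// A large matrix with only a handful of non-zero cells: CSR storage means the
// memory cost scales with the non-zero count, not with rows * columns.
var s = new SparseMatrix(1000, 1000);
s[0, 0] = 4.0;
s[1, 1] = 3.0;
s[999, 0] = -1.0;
Console.WriteLine(s.NonZerosCount);               // 3

// The usual operations are available and return matrices of matching shape.
Matrix<double> lower    = s.LowerTriangle();          // keeps the diagonal
Matrix<double> strictly = s.StrictlyLowerTriangle();  // drops the diagonal
Matrix<double> sum      = s + s.Transpose();
Console.WriteLine(sum.FrobeniusNorm());
```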
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
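A corresponding sketch for the sparse vector, including the efficiency caveat about adding a non-zero scalar that is spelled out above; again illustrative only, with member names assumed from current Math.NET Numerics.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

// A very long vector with three non-zero entries.
var v = new SparseVector(100000);
v[10] = 1.5;
v[500] = -2.0;
v[99999] = 0.25;
Console.WriteLine(v.NonZerosCount);      // 3

var w = new SparseVector(100000);
w[500] = 4.0;
Console.WriteLine(v.DotProduct(w));      // -8

// As warned above, v + 1.0 would fill every cell and defeat the sparse storage;
// use a dense vector when a non-zero scalar has to be added to all elements.
```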
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + double version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
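A brief sketch of the norm and pointwise members documented above, applied to a small dense vector; `PointwisePower`, `Norm(p)` and the specific norm method names are assumptions based on the current Math.NET Numerics API.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var v = DenseVector.OfArray(new double[] { 3.0, -4.0, 12.0 });

Console.WriteLine(v.L1Norm());         // 19   (sum of absolute values)
Console.WriteLine(v.L2Norm());         // 13   (Euclidean norm)
Console.WriteLine(v.InfinityNorm());   // 12   (largest absolute value)
Console.WriteLine(v.Norm(3.0));        // general p-norm

// Pointwise operations return a new vector of the same length.
var squared = v.PointwisePower(2.0);   // element-wise square: 9, 16, 144
Console.WriteLine(squared.Sum());      // 169
```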
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
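The dense-matrix constructors above differ mainly in whether they copy their input. Below is a hedged sketch contrasting the copying `OfArray` factory with the raw-array binding constructor (column-major, no copy) that the documentation describes; it is illustrative, not taken from this repository.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

// Copying factory: the matrix gets its own storage.
var copied = DenseMatrix.OfArray(new double[,]
{
    { 1.0, 2.0 },
    { 3.0, 4.0 },
});

// Binding constructor: 2 rows, 2 columns, data listed column by column.
// No copy is made, so later changes to `raw` are visible through the matrix.
var raw = new[] { 1.0, 3.0, 2.0, 4.0 };
var bound = new DenseMatrix(2, 2, raw);

raw[0] = 10.0;
Console.WriteLine(copied[0, 0]);   // 1
Console.WriteLine(bound[0, 0]);    // 10
```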
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. 
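And a short sketch of the dense-matrix arithmetic and norms listed above, restricted to operators and members the documentation itself mentions (exact member names assumed from current Math.NET Numerics):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

var a = DenseMatrix.OfArray(new double[,] { { 1, 2 }, { 3, 4 } });
var m = DenseMatrix.OfArray(new double[,] { { 0, 1 }, { 1, 0 } });
var v = DenseVector.OfArray(new double[] { 1, 1 });

Matrix<double> product  = a * m;                  // matrix * matrix
Vector<double> image    = a * v;                  // matrix * vector
Matrix<double> scaled   = 2.0 * a;                // scalar * matrix
Matrix<double> hadamard = a.PointwiseMultiply(m); // element-wise product

Console.WriteLine(a.FrobeniusNorm());  // sqrt(1 + 4 + 9 + 16) ≈ 5.4772
Console.WriteLine(a.InfinityNorm());   // 7  (maximum absolute row sum)
Console.WriteLine(a.Trace());          // 5
```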
+ + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. 
+ The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. 
+ The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiply this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply this one by. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a float dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. 
+ + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
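To illustrate the diagonal matrix type introduced above, here is a sketch using the raw-array binding constructor the documentation describes, together with the documented rule that only zero (or NaN) may be written off the diagonal; member names are assumed from current Math.NET Numerics.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

// 3x3 diagonal matrix; only the diagonal is stored, and the array is bound directly.
var d = new DiagonalMatrix(3, 3, new[] { 2.0, 4.0, 8.0 });

Console.WriteLine(d.Determinant());   // 64, the product of the diagonal entries
Console.WriteLine(d.Diagonal());      // the diagonal as a vector: 2, 4, 8
Matrix<double> inv = d.Inverse();     // diagonal of reciprocals: 0.5, 0.25, 0.125

// d[0, 1] = 1.0;                     // would throw: non-zero off-diagonal writes are rejected
d[0, 1] = 0.0;                        // allowed: writing zero off the diagonal changes nothing
```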
+ + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. 
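The factorization classes that follow all share the same pattern: the factorization is computed once in the constructor and `Solve` reuses the cached factor. A hedged sketch for the Cholesky case with a small symmetric positive definite system, assuming the `Cholesky()` extension, `Solve` method and `Determinant` property of current Math.NET Numerics:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

// Symmetric, positive definite system A*x = b.
var A = DenseMatrix.OfArray(new double[,] { { 4.0, 1.0 }, { 1.0, 3.0 } });
var b = DenseVector.OfArray(new double[] { 1.0, 2.0 });

// The factorization work happens here; a non-SPD matrix makes this call throw.
var cholesky = A.Cholesky();

Vector<double> x = cholesky.Solve(b);      // solves A*x = b using the cached factor
Console.WriteLine(x);                      // ≈ (0.0909, 0.6364)
Console.WriteLine(cholesky.Determinant);   // determinant of A (= 11), from the factor
```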
+ + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. 
On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. 
+ The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. 
+ Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. 
+ If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. 
+ If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. 
On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + float version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. 
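The factorization classes documented at the start of this section (Cholesky, LU, QR, EVD, SVD) all expose Solve overloads for AX = B and Ax = b. As a plain illustration of what such a solve involves, the sketch below factors a symmetric positive definite matrix as A = L*L' and then solves Ax = b by forward and back substitution. This is a minimal stand-alone C# sketch of the textbook algorithm, not the library's implementation; all names are illustrative.

```csharp
using System;

// Textbook Cholesky factorization A = L*L' plus the two triangular solves
// behind a Cholesky-based Solve(b). Illustrative only.
static class CholeskySketch
{
    // Factor a symmetric positive definite matrix A (n x n) into L, lower triangular.
    public static double[,] Factor(double[,] a)
    {
        int n = a.GetLength(0);
        var l = new double[n, n];
        for (int j = 0; j < n; j++)
        {
            double d = a[j, j];
            for (int k = 0; k < j; k++) d -= l[j, k] * l[j, k];
            if (d <= 0.0) throw new ArgumentException("Matrix is not positive definite.");
            l[j, j] = Math.Sqrt(d);
            for (int i = j + 1; i < n; i++)
            {
                double s = a[i, j];
                for (int k = 0; k < j; k++) s -= l[i, k] * l[j, k];
                l[i, j] = s / l[j, j];
            }
        }
        return l;
    }

    // Solve A x = b: forward substitution (L y = b), then back substitution (L' x = y).
    public static double[] Solve(double[,] l, double[] b)
    {
        int n = b.Length;
        var y = new double[n];
        for (int i = 0; i < n; i++)
        {
            double s = b[i];
            for (int k = 0; k < i; k++) s -= l[i, k] * y[k];
            y[i] = s / l[i, i];
        }
        var x = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double s = y[i];
            for (int k = i + 1; k < n; k++) s -= l[k, i] * x[k];
            x[i] = s / l[i, i];
        }
        return x;
    }
}
```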
+ + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. 
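The distinction drawn above between the canonical modulus (result takes the sign of the divisor) and the remainder operator (result takes the sign of the dividend) is easy to confuse. Below is a minimal, self-contained illustration in plain C#; the helper name is made up for the example.

```csharp
using System;

// Remainder (% operator) keeps the sign of the dividend; the canonical
// modulus keeps the sign of the divisor.
static class ModulusSketch
{
    public static double CanonicalModulus(double dividend, double divisor)
    {
        double r = dividend % divisor;          // sign of the dividend
        return (r != 0 && Math.Sign(r) != Math.Sign(divisor)) ? r + divisor : r;
    }

    public static void Main()
    {
        Console.WriteLine(-7.0 % 3.0);                   // -1 (remainder)
        Console.WriteLine(CanonicalModulus(-7.0, 3.0));  //  2 (canonical modulus)
    }
}
```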
+ + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
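As an indication of what the solver computes, here is a stand-alone sketch of the unpreconditioned BiCGStab recurrence from the Templates reference above, written against plain arrays. It illustrates the algorithm only and is not the library's implementation; a production run would add the preconditioner applications and the stopping criteria supplied by the iterator.

```csharp
using System;

// Unpreconditioned BiCGStab, initial guess x0 = 0. A is a dense 2-D array
// purely to keep the sketch self-contained.
static class BiCgStabSketch
{
    static double Dot(double[] a, double[] b)
    {
        double s = 0; for (int i = 0; i < a.Length; i++) s += a[i] * b[i]; return s;
    }

    static double[] MatVec(double[,] a, double[] x)
    {
        int n = x.Length; var y = new double[n];
        for (int i = 0; i < n; i++) for (int j = 0; j < n; j++) y[i] += a[i, j] * x[j];
        return y;
    }

    public static double[] Solve(double[,] a, double[] b, double tol = 1e-8, int maxIter = 1000)
    {
        int n = b.Length;
        var x = new double[n];
        var r = (double[])b.Clone();              // r0 = b - A*x0 with x0 = 0
        var rHat = (double[])r.Clone();           // shadow residual
        var p = new double[n]; var v = new double[n];
        double rhoOld = 1, alpha = 1, omega = 1;

        for (int k = 0; k < maxIter; k++)
        {
            double rho = Dot(rHat, r);
            double beta = (rho / rhoOld) * (alpha / omega);
            for (int i = 0; i < n; i++) p[i] = r[i] + beta * (p[i] - omega * v[i]);
            v = MatVec(a, p);
            alpha = rho / Dot(rHat, v);
            var s = new double[n];
            for (int i = 0; i < n; i++) s[i] = r[i] - alpha * v[i];
            var t = MatVec(a, s);
            omega = Dot(t, s) / Dot(t, t);
            for (int i = 0; i < n; i++) x[i] += alpha * p[i] + omega * s[i];
            for (int i = 0; i < n; i++) r[i] = s[i] - omega * t[i];
            if (Math.Sqrt(Dot(r, r)) < tol) break; // recurrence residual; true residual check omitted
            rhoOld = rho;
        }
        return x;
    }
}
```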
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
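A minimal sketch of the composite idea described above, assuming nothing about the library's types: each solver in the sequence is tried in turn, and the first result whose true residual b - Ax is small enough is accepted. The delegate signature and tolerance handling are illustrative only.

```csharp
using System;
using System.Collections.Generic;

// Try a sequence of solvers and accept the first converged result.
static class CompositeSolverSketch
{
    public delegate double[] LinearSolver(double[,] a, double[] b);

    public static double[] Solve(double[,] a, double[] b,
                                 IEnumerable<LinearSolver> solvers, double tol = 1e-8)
    {
        foreach (var solver in solvers)
        {
            double[] x = solver(a, b);

            // true residual: b - A*x
            int n = b.Length; double norm2 = 0;
            for (int i = 0; i < n; i++)
            {
                double ri = b[i];
                for (int j = 0; j < n; j++) ri -= a[i, j] * x[j];
                norm2 += ri * ri;
            }
            if (Math.Sqrt(norm2) <= tol) return x;   // accept the first converged solution
        }
        throw new InvalidOperationException("None of the sub-solvers converged.");
    }
}
```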
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
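The properties documented below describe alternating runs of BiCGStab-style and GPBiCG-style steps. One plausible reading of that switching rule is sketched here; the property names, the default counts and the exact cycling scheme are assumptions made for illustration and may not match the solver's actual behaviour.

```csharp
// Alternate a fixed number of BiCGStab-style steps with a fixed number of
// GPBiCG-style steps, repeating the cycle. Counts are arbitrary placeholders.
static class GpBiCgSwitchSketch
{
    public static int BiCgStabSteps { get; set; } = 2;   // placeholder default
    public static int GpBiCgSteps { get; set; } = 8;     // placeholder default

    // Returns true when iteration 'k' (0-based) should use a BiCGStab-style step.
    public static bool UseBiCgStabStep(int k)
    {
        int cycle = BiCgStabSteps + GpBiCgSteps;
        return (k % cycle) < BiCgStabSteps;
    }
}
```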
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
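For orientation, the sketch below shows the ILU(0) rule on a dense array: an ordinary LU elimination restricted to the non-zero pattern of the original matrix, so no fill-in is created, followed by the forward/back substitution used to apply the preconditioner. The real preconditioner works on sparse storage with L and U packed into one matrix; this is only a plain illustration of the algorithm, not the library code.

```csharp
using System;

// ILU(0): LU elimination that only touches positions which are non-zero in A.
static class Ilu0Sketch
{
    public static double[,] Factor(double[,] a)
    {
        int n = a.GetLength(0);
        var lu = (double[,])a.Clone();
        var pattern = new bool[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                pattern[i, j] = a[i, j] != 0.0;

        for (int k = 0; k < n - 1; k++)
        {
            for (int i = k + 1; i < n; i++)
            {
                if (!pattern[i, k]) continue;          // respect the sparsity pattern
                lu[i, k] /= lu[k, k];                  // multiplier stored in the L part
                for (int j = k + 1; j < n; j++)
                    if (pattern[i, j])                 // drop everything that would be fill-in
                        lu[i, j] -= lu[i, k] * lu[k, j];
            }
        }
        return lu;                                     // L (unit lower) and U share one array
    }

    // Approximate solve: forward substitution with unit-diagonal L, back substitution with U.
    public static double[] Apply(double[,] lu, double[] b)
    {
        int n = b.Length;
        var y = new double[n];
        for (int i = 0; i < n; i++)
        {
            double s = b[i];
            for (int j = 0; j < i; j++) s -= lu[i, j] * y[j];
            y[i] = s;                                  // L has an implicit unit diagonal
        }
        var x = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double s = y[i];
            for (int j = i + 1; j < n; j++) s -= lu[i, j] * x[j];
            x[i] = s / lu[i, i];
        }
        return x;
    }
}
```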
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
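The drop tolerance used by this preconditioner removes small entries from the incomplete factors as each row is computed. A minimal sketch of such a drop rule, with the row represented as a dictionary purely for illustration (the library stores rows differently):

```csharp
using System;
using System.Collections.Generic;

// Remove entries whose magnitude falls below the drop tolerance,
// keeping the diagonal unconditionally.
static class DropRuleSketch
{
    public static void DropSmallEntries(IDictionary<int, double> row, int diagonalIndex, double dropTolerance)
    {
        var keep = new Dictionary<int, double>();
        foreach (var entry in row)
        {
            if (entry.Key == diagonalIndex || Math.Abs(entry.Value) >= dropTolerance)
                keep.Add(entry.Key, entry.Value);
        }
        row.Clear();
        foreach (var entry in keep) row.Add(entry.Key, entry.Value);
    }
}
```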
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
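The solver builds its Krylov subspace from a set of orthonormal starting vectors, with the count clamped to the number of variables. Below is a plain sketch of one way such a set could be produced (random vectors orthonormalized by Gram-Schmidt); the method name, seeding and clamping details are illustrative assumptions, not the library's routine.

```csharp
using System;

// Build up to 'requested' orthonormal starting vectors of length 'numberOfVariables'.
static class StartingVectorsSketch
{
    public static double[][] Create(int requested, int numberOfVariables, int seed = 1)
    {
        int count = Math.Max(1, Math.Min(requested, numberOfVariables));
        var rng = new Random(seed);
        var vectors = new double[count][];
        for (int k = 0; k < count; k++)
        {
            var v = new double[numberOfVariables];
            for (int i = 0; i < numberOfVariables; i++) v[i] = rng.NextDouble() - 0.5;

            // remove the components along the previously accepted vectors
            for (int j = 0; j < k; j++)
            {
                double dot = 0;
                for (int i = 0; i < numberOfVariables; i++) dot += v[i] * vectors[j][i];
                for (int i = 0; i < numberOfVariables; i++) v[i] -= dot * vectors[j][i];
            }

            // normalize to unit length
            double norm = 0;
            for (int i = 0; i < numberOfVariables; i++) norm += v[i] * v[i];
            norm = Math.Sqrt(norm);
            for (int i = 0; i < numberOfVariables; i++) v[i] /= norm;
            vectors[k] = v;
        }
        return vectors;
    }
}
```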
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
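Like the other solvers in this section, TFQMR monitors the true residual, residual = b - Ax, recomputed from the original system rather than taken from the short recurrences. A self-contained sketch of that check (plain arrays, illustrative names):

```csharp
// residual = b - A*x, evaluated directly against the original system.
static class ResidualSketch
{
    public static double[] TrueResidual(double[,] a, double[] x, double[] b)
    {
        int n = b.Length;
        var residual = new double[n];
        for (int i = 0; i < n; i++)
        {
            double s = b[i];
            for (int j = 0; j < n; j++) s -= a[i, j] * x[j];
            residual[i] = s;
        }
        return residual;
    }
}
```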
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
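The sparse matrix documented above stores its data in the 3-array compressed-sparse-row (CSR) format. Here is a small worked example of that layout for a concrete 3-by-3 matrix, using generic array names rather than the library's internal field names:

```csharp
// CSR layout for
//
//     | 1 0 2 |
//     | 0 0 3 |
//     | 4 5 0 |
//
static class CsrExample
{
    public static readonly float[] Values        = { 1f, 2f, 3f, 4f, 5f }; // non-zero values, row by row
    public static readonly int[]   ColumnIndices = { 0, 2, 2, 0, 1 };      // column of each stored value
    public static readonly int[]   RowPointers   = { 0, 2, 3, 5 };         // start of each row in Values, plus the end

    // Element lookup: scan the stored columns of row i for column j.
    public static float At(int i, int j)
    {
        for (int k = RowPointers[i]; k < RowPointers[i + 1]; k++)
            if (ColumnIndices[k] == j) return Values[k];
        return 0f;
    }
}
```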
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
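Several of the sparse vector operations above only need to visit the stored non-zero entries. A minimal sketch of a sparsity-aware dot product under that storage model; the index/value arrays stand in for the vector's internal storage and are illustrative only:

```csharp
// Dot product of a sparse vector (indices/values) with a dense vector:
// only the stored non-zero entries contribute.
static class SparseDotSketch
{
    public static float Dot(int[] indices, float[] values, float[] dense)
    {
        float sum = 0f;
        for (int k = 0; k < indices.Length; k++)
            sum += values[k] * dense[indices[k]];   // a[i]*b[i] only where a[i] != 0
        return sum;
    }
}
```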
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a float sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + float version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
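Several of the entries above and below distinguish the canonical modulus (result takes the sign of the divisor) from the remainder (the % operator, result takes the sign of the dividend). The vector methods apply this element-wise; a plain scalar illustration of the difference:

```csharp
// Canonical modulus vs. remainder, shown on plain scalars.
int dividend = -7, divisor = 3;

int remainder = dividend % divisor;                         // -1: sign of the dividend
int modulus   = ((dividend % divisor) + divisor) % divisor; //  2: sign of the divisor

System.Console.WriteLine($"remainder={remainder}, canonical modulus={modulus}");
```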
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
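The dense-matrix copy constructors listed above (from two-dimensional arrays, enumerables, column/row arrays and vectors, and diagonals) map onto builder calls roughly as follows. The builder names below come from MathNet.Numerics' generic `Matrix<T>.Build` factory and should be read as assumptions, since the concrete member names were stripped from these comments:

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

// Copy of a two-dimensional array (independent storage, as noted above).
var a = M.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });

// Copy of column arrays: each inner array becomes one column.
var b = M.DenseOfColumnArrays(new double[] { 1, 3 }, new double[] { 2, 4 });

// Diagonal copy of an array; all off-diagonal cells are zero.
var d = M.DenseOfDiagonalArray(new double[] { 1, 2, 3 });

System.Console.WriteLine(a.Equals(b));  // True: the same matrix built two ways
```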
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. 
+ Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. 
+ + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex value. + The result of the division. + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. 
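For the complex dense vector documented above, the difference between `DotProduct` and `ConjugateDotProduct` is easy to miss: only the conjugated form is the Hermitian inner product, and applied to a vector with itself it yields the squared Euclidean norm. A sketch, with the builder name assumed as in the earlier examples:

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;

var a = Vector<Complex>.Build.DenseOfArray(new[] { new Complex(1, 2), new Complex(0, -1) });
var b = Vector<Complex>.Build.DenseOfArray(new[] { new Complex(3, 0), new Complex(2, 2) });

var plain = a.DotProduct(b);           // sum of a[i]*b[i], no conjugation
var herm  = a.ConjugateDotProduct(b);  // sum of conj(a[i])*b[i]

// conj(a)·a is real and equals the squared Euclidean norm.
Console.WriteLine(a.ConjugateDotProduct(a).Real);   // 6
Console.WriteLine(a.L2Norm() * a.L2Norm());         // 6 (up to rounding)
```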
+ + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. 
+ + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the Frobenius norm of this matrix. + The Frobenius norm of this matrix. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. 
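As the Cholesky entries above state, the factorization is computed once at construction, only accepts a symmetric positive definite matrix, and yields a lower triangular factor L with A = L*L' that can then be reused for solving and for the determinant. A minimal sketch; the `Cholesky()`, `Factor`, `Solve` and `Determinant` member names are taken from MathNet.Numerics' factorization API and are assumptions here:

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 4, 2 },
    { 2, 3 },
});
var b = Vector<double>.Build.DenseOfArray(new double[] { 6, 5 });

var chol = A.Cholesky();       // throws if A is not symmetric positive definite
var L    = chol.Factor;        // lower triangular, A = L * L'
var x    = chol.Solve(b);      // solves A * x = b, here x = (1, 1)
var det  = chol.Determinant;   // determinant of A, cheap once L is known: 4*3 - 2*2 = 8

System.Console.WriteLine($"x = {x}, det = {det}");
```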
+ + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. 
This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. 
A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . 
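The SVD entries above expose the effective numerical rank and the 2-norm (the largest singular value) directly from the cached decomposition. A short sketch under the same API assumptions as the earlier examples:

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 3, 0, 0 },
    { 0, 4, 0 },
});

var svd = M.Svd();                    // M = U * S * V^T, computed once at construction
System.Console.WriteLine(svd.Rank);   // 2: number of non-negligible singular values
System.Console.WriteLine(svd.L2Norm); // 4: largest singular value of M
```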
+ + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. 
+ Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. 
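The LU entries above follow the same factor-once pattern: the decomposition is cached at construction and then reused for solving, and (as the next entry notes) for computing the inverse. A sketch under the same API assumptions:

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 2, 1 },
    { 1, 3 },
});
var b = Vector<double>.Build.DenseOfArray(new double[] { 3, 4 });

var lu   = A.LU();        // pivoted factorization: P*A = L*U
var x    = lu.Solve(b);   // solves A * x = b, here x = (1, 1)
var Ainv = lu.Inverse();  // inverse computed from the same factorization

System.Console.WriteLine((A * Ainv).ToString());   // approximately the identity matrix
```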
+ + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex value z1 + Complex value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. 
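The Givens-rotation helper described at the end of the block above (the DROTG-equivalent routine) returns the cosine/sine pair that zeroes the y-coordinate of a point (da, db). A simplified stand-alone version that ignores the sign and overflow conventions of the real LAPACK routine:

```csharp
using System;

// Simplified Givens rotation: find c, s, r with  c*a + s*b = r  and  c*b - s*a = 0.
static (double c, double s, double r) Givens(double a, double b)
{
    double r = Math.Sqrt(a * a + b * b);
    if (r == 0.0)
    {
        return (1.0, 0.0, 0.0);   // nothing to rotate
    }
    return (a / r, b / r, r);
}

var (c, s, r) = Givens(3.0, 4.0);
Console.WriteLine($"c={c}, s={s}, r={r}");   // c=0.6, s=0.8, r=5
Console.WriteLine(c * 4.0 - s * 3.0);        // 0: the y-coordinate is zeroed
```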
+ + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. 
+ + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
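Since the factorizations are reached through the generic Matrix<T> API, a minimal sketch of solving a small complex system once via QR and once via SVD may help. It assumes Math.NET Numerics 3.x and its usual Matrix<T>.Build / .QR() / .Svd() / Solve() members; exact overloads may differ between library versions.

```csharp
// Sketch only: assumes the Math.NET Numerics 3.x factorization API.
using System;
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSketch
{
    static void Main()
    {
        // Small complex system A x = b.
        var A = Matrix<Complex>.Build.DenseOfArray(new Complex[,]
        {
            { new Complex(4, 0), new Complex(1, 1) },
            { new Complex(1, -1), new Complex(3, 0) }
        });
        var b = Vector<Complex>.Build.Dense(new[] { new Complex(1, 0), new Complex(2, 0) });

        var xQr = A.QR().Solve(b);   // Householder-based QR, computed and cached at construction
        var svd = A.Svd();           // M = U * S * V^T (conjugate transpose in the complex case)
        var xSvd = svd.Solve(b);

        Console.WriteLine(xQr);
        Console.WriteLine(svd.S);    // singular values, descending order
        Console.WriteLine(xSvd);
    }
}
```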
Iterative solvers (Krylov-subspace methods):
* BiCgStab — Bi-Conjugate Gradient Stabilized, an improvement of the standard Conjugate Gradient (CG) solver that also works on non-symmetric matrices; much of its success depends on choosing a proper preconditioner. Algorithm taken from "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods" (Barrett, Berry, Chan, Demmel, Donato, Dongarra, Eijkhout, Pozo, Romine, van der Vorst; http://www.netlib.org/templates/Templates.html), chapter 2, section 2.3.8, page 27.
* CompositeSolver — runs a sequence of sub-solvers; a supplied iterator is shared by all of them. Based on "Faster PDE-based simulations using robust composite linear solvers" (Bhowmick, Raghavan, McInnes, Norris; Future Generation Computer Systems, Vol. 20, 2004, pp. 373-387).
* GpBiCg — Generalized Product Bi-Conjugate Gradient, an alternative to BiCGStab for non-symmetric matrices that alternates between BiCGStab and GPBiCG steps (the number of steps of each kind before switching is configurable). Based on "GPBiCG(m,l): a hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness" (S. Fujino; Applied Numerical Mathematics, Vol. 41, 2002, pp. 107-117). Again, the choice of preconditioner is decisive.
* MlkBiCgStab — Multiple-Lanczos Bi-Conjugate Gradient Stabilized, a BiCGStab variant built on multiple Lanczos starting vectors; the number of orthonormal starting vectors for the Krylov subspace is configurable (larger than 1 and smaller than the number of variables). Based on Man-Chung Yeung and Tony F. Chan, SIAM Journal on Scientific Computing, Vol. 21, No. 4, pp. 1263-1290.
* TFQMR — Transpose-Free Quasi-Minimal Residual solver. From Yousef Saad, "Iterative Methods for Sparse Linear Systems", chapter 7, section 7.4.3, page 219.

All solvers share the same Solve(A, b, x, iterator, preconditioner) entry point and a helper that computes the true residual r = b − Ax.

Preconditioners:
* DiagonalPreconditioner — uses the inverse of the matrix diagonal as preconditioning values; requires a square matrix.
* ILU(0) — incomplete, level-0 LU factorization; the L and U factors are stored in one combined matrix to reduce storage, and both triangles can be extracted for debugging. From Saad, "Iterative Methods for Sparse Linear Systems", chapter 10, section 10.3.2, page 275.
* ILUTP-Mem — incomplete LU factorization with drop tolerance and partial pivoting. Fill level (default corresponds to 200% of the original non-zeros), drop tolerance (default 0.0001) and pivot tolerance (default 0.0, i.e. no pivoting) are configurable; changing them after construction invalidates the preconditioner and requires re-initialization, and negative values are rejected. Internally the data is stored sparse, so passing a dense matrix is not recommended; column-pivoting and heap-sort helpers are included. From Tzu-Yi Chen, "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner", Lecture Notes in Computer Science, Vol. 3046, 2004, pp. 20-28 (algorithm in section 2, page 22).
* MILU(0) — a simple modified ILU(0) preconditioner (original Fortran code by Yousef Saad, 07 January 2004); it can run in modified or standard mode, expects sparse compressed-row (CSR) storage, and reports a zero pivot by returning the step index at which it occurred.
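A hedged sketch of how these solvers and preconditioners are typically wired together follows; it assumes Math.NET Numerics 3.x, where the Iterator, stop-criterion, BiCgStab and DiagonalPreconditioner names come from, and those names and the Solve signature may differ in other versions.

```csharp
// Sketch only: assumes Math.NET Numerics 3.x solver classes and signatures.
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IterativeSolveSketch
{
    static void Main()
    {
        // Small sparse, non-symmetric test system A x = b.
        var A = Matrix<double>.Build.SparseOfIndexed(3, 3, new[]
        {
            Tuple.Create(0, 0, 4.0), Tuple.Create(0, 1, 1.0),
            Tuple.Create(1, 1, 3.0), Tuple.Create(1, 2, 1.0),
            Tuple.Create(2, 2, 2.0)
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

        // Stop after 1000 iterations or once the residual is small enough.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        var x = Vector<double>.Build.Dense(3);
        solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
        Console.WriteLine((b - A * x).L2Norm()); // true residual r = b - Ax
    }
}
```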
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. 
+ + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. 
+ + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. 
+ + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. 
+ + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. 
+ + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. 
+ This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex32 value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex32 value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex32 value. + The result of the division. + If is . 
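A matching sketch for the DenseVector construction and arithmetic described above, again using the double-precision type; the Complex32 DenseVector mirrors it with Complex32 scalars.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseVectorSketch
{
    static void Main()
    {
        var v = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
        var w = new DenseVector(3);        // length 3, initialized to zero
        w[0] = 4.0; w[1] = 5.0; w[2] = 6.0;

        var sum = v + w;                   // element-wise addition
        var scaled = 2.0 * v;              // scalar multiplication
        var dot = v.DotProduct(w);         // sum of v[i]*w[i]

        Console.WriteLine(sum);
        Console.WriteLine(scaled);
        Console.WriteLine($"dot = {dot}");
    }
}
```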
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex32 dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex32 dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. 
+ All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. 
+ + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. 
+ + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. 
+ + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. 
+ + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. 
Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. 
+ + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex32 value z1 + Complex32 value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex32 version of the class. 
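The factorization classes summarized above (Cholesky, LU, QR, EVD, SVD) all expose the same Solve pattern for AX = B and Ax = b. A minimal sketch with the double-precision dense types, assuming the standard Math.NET Numerics 3.x factory methods on the matrix type (member names not spelled out in this file, such as ConditionNumber, are assumptions):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class FactorizationSketch
{
    static void Main()
    {
        // Symmetric positive definite, so Cholesky is applicable as well.
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0 });

        var xChol = a.Cholesky().Solve(b); // requires symmetric positive definite A
        var xLu   = a.LU().Solve(b);       // general square A
        var xQr   = a.QR().Solve(b);       // square or tall A (least squares)

        var svd = a.Svd(true);             // singular values, rank, condition number
        Console.WriteLine(xChol);
        Console.WriteLine(svd.ConditionNumber);
    }
}
```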
+ + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
+ + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. 
+ Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
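A minimal sketch of such a use, shown with the double-precision types for readability (the Complex32 solver follows the same pattern). The class names `BiCgStab`, `Iterator<double>`, `IterationCountStopCriterion<double>`, `ResidualStopCriterion<double>` and `DiagonalPreconditioner` are assumed to follow Math.NET Numerics 3.x naming and should be checked against the version actually referenced.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        // Small non-symmetric test system A x = b.
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 4.0, 1.0, 0.0 },
            { 2.0, 5.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
        var x = new DenseVector(3);   // result vector, filled in by the solver

        // Stop on convergence (small residual) or after a fixed iteration budget.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
    }
}
```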
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the matrix A. + Residual values in a vector. + Instance of the vector x. + Instance of the vector b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b. + The result vector, x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made up of a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
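For GPBiCG the pattern is the same; the fragment below continues the BiCgStab sketch above (same `a`, `b`, `x` and `iterator` objects) and only swaps the solver. The switching-step property names in the comments are assumptions taken from the summaries in this file, not verified signatures.

```csharp
// Continues the BiCgStab sketch above (same a, b, x, iterator objects).
var gpbicg = new GpBiCg();
// Optionally tune how many BiCGStab vs. GPBiCG steps run before switching;
// property names assumed from the summaries above, not verified:
// gpbicg.NumberOfBiCgStabSteps = 2;
// gpbicg.NumberOfGpBiCgSteps = 4;
gpbicg.Solve(a, b, x, iterator, new DiagonalPreconditioner());
```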
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
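To use the ILU(0) preconditioner instead of the diagonal one, the solver call stays the same and only the preconditioner argument changes. The class name `ILU0Preconditioner` is an assumption based on Math.NET Numerics 3.x naming and should be verified against the referenced assembly.

```csharp
// Continues the BiCgStab sketch above; ILU0Preconditioner is an assumed class name.
var ilu0 = new ILU0Preconditioner();
new BiCgStab().Solve(a, b, x, iterator, ilu0);
```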
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
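A sketch of constructing the drop-tolerance ILUTP preconditioner with the three parameters described below (fill level, drop tolerance, pivot tolerance), using their documented standard values. The class name `ILUTPPreconditioner` and the argument order are assumptions inferred from the constructor summary, not verified signatures.

```csharp
// Continues the BiCgStab sketch above; class name and argument order are assumed.
// Arguments follow the parameter descriptions: fill level, drop tolerance, pivot tolerance.
var ilutp = new ILUTPPreconditioner(200.0, 0.0001, 0.0);
new BiCgStab().Solve(a, b, x, iterator, ilutp);
```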
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+             SIAM Journal on Scientific Computing +
+             Volume 21, Number 4, pp. 1263-1290 +
+             The example code below provides an indication of the possible use of the solver. +
+
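The example code referred to above did not survive extraction. As a stand-in, here is a minimal sketch of how such an iterative solver might be driven, using the `Solve(matrix, input, result, iterator, preconditioner)` contract documented below. The solver class name `MlkBiCgStab`, the preconditioner class name `MILU0Preconditioner`, the stop-criterion types and the namespaces are assumptions based on the Math.NET Numerics API, not something the text confirms.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class MlkBiCgStabSketch
{
    static void Main()
    {
        // Small, diagonally dominant test system Ax = b (hypothetical data).
        var a = SparseMatrix.OfArray(new double[,]
        {
            {  5, -1,  0,  0 },
            { -1,  5, -1,  0 },
            {  0, -1,  5, -1 },
            {  0,  0, -1,  5 },
        });
        var b = Vector<double>.Build.Dense(new double[] { 1, 2, 3, 4 });
        var x = Vector<double>.Build.Dense(b.Count); // result vector, filled by the solver

        // Stop after 1000 iterations or once the residual drops below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new MlkBiCgStab();                 // class name assumed
        var preconditioner = new MILU0Preconditioner(); // class name assumed

        // Solve(matrix A, vector b, result x, iterator, preconditioner), as documented below.
        solver.Solve(a, b, x, iterator, preconditioner);

        Console.WriteLine(x);
        Console.WriteLine("true residual norm: " + (b - a * x).L2Norm());
    }
}
```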
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+             Iterative Methods for Sparse Linear Systems. +
+ Yousef Saad +
+             The algorithm is described in Chapter 7, Section 7.4.3, page 219 +
+             The example code below provides an indication of the possible use of the solver. +
+
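As above, the referenced example code is missing, so here is a hedged sketch only. It assumes the same matrix, vector and iterator setup as the ML(k)-BiCGStab sketch earlier; the class names `TFQMR` and `UnitPreconditioner<T>` are assumptions based on the Math.NET Numerics API.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class TfqmrSketch
{
    // Solves Ax = b with TFQMR and returns x; a and b are supplied by the caller,
    // for example the test system from the previous sketch.
    static Vector<double> Solve(Matrix<double> a, Vector<double> b)
    {
        var x = Vector<double>.Build.Dense(b.Count);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // TFQMR exposes the same Solve(matrix, input, result, iterator, preconditioner)
        // contract documented below; UnitPreconditioner is assumed to be a no-op preconditioner.
        var solver = new TFQMR();
        solver.Solve(a, b, x, iterator, new UnitPreconditioner<double>());
        return x;
    }
}
```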
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
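To make the sparse matrix members documented in this stretch a little more concrete, here is a small hedged sketch. The factory `SparseMatrix.OfArray` and the member names used (`NonZerosCount`, `InfinityNorm`, `FrobeniusNorm`, `UpperTriangle`, `StrictlyLowerTriangle`, `IsSymmetric`) are assumptions based on the Math.NET Numerics API; the descriptions they correspond to are the ones above and immediately below.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseMatrixSketch
{
    static void Main()
    {
        // Mostly-zero 3x3 matrix; only the non-zero entries are stored in the CSR arrays.
        var a = SparseMatrix.OfArray(new double[,]
        {
            { 2, 0, 0 },
            { 0, 3, 1 },
            { 0, 0, 4 },
        });

        Console.WriteLine(a.NonZerosCount);           // assumed name for "number of non zero elements"
        Console.WriteLine(a.InfinityNorm());          // maximum absolute row sum
        Console.WriteLine(a.FrobeniusNorm());         // sqrt of the sum of squared entries
        Console.WriteLine(a.UpperTriangle());         // new matrix holding the upper triangle
        Console.WriteLine(a.StrictlyLowerTriangle()); // lower triangle without the diagonal

        // The +, - and * operators documented nearby allocate new matrices/vectors.
        var b = a + a.Transpose();
        var v = Vector<double>.Build.Dense(new double[] { 1, 1, 1 });
        Console.WriteLine(a * v);
        Console.WriteLine(b.IsSymmetric());
    }
}
```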
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
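A short sketch of the sparse vector members described above follows. The constructor and member names (`SparseVector(int)`, `NonZerosCount`, `DotProduct`, `AbsoluteMaximumIndex`, `L1Norm`, `Add`) are assumptions based on the Math.NET Numerics API; the behaviour they illustrate, including the caveat that adding a non-zero scalar produces a fully populated and therefore inefficient "sparse" vector, is taken from the documentation above.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseVectorSketch
{
    static void Main()
    {
        // Long, mostly-zero vectors: only the assigned entries are actually stored.
        var v = new SparseVector(1000);
        v[10] = 2.5;
        v[500] = -7.0;

        var w = new SparseVector(1000);
        w[10] = 4.0;

        Console.WriteLine(v.NonZerosCount);          // 2
        Console.WriteLine(v.DotProduct(w));          // 2.5 * 4.0 = 10
        Console.WriteLine(v.AbsoluteMaximumIndex()); // 500
        Console.WriteLine(v.L1Norm());               // 9.5

        // Documented caveat: the result of adding a non-zero scalar is 100% filled,
        // so a dense vector would be the better choice for this kind of operation.
        var filled = v.Add(1.0);
        Console.WriteLine(filled.Count);             // still 1000 elements, none of them zero
    }
}
```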
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex32. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex32 version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. 
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. 
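The documentation above distinguishes the canonical modulus (result has the sign of the divisor) from the remainder (result has the sign of the dividend, like C#'s `%` operator). The library-free sketch below shows the two conventions on scalars, which is the same sign rule the vector members apply element-wise.

```csharp
using System;

class ModulusVsRemainder
{
    // Remainder: the C# % convention, result carries the sign of the dividend.
    static double Remainder(double dividend, double divisor) => dividend % divisor;

    // Canonical modulus: result carries the sign of the divisor.
    static double CanonicalModulus(double dividend, double divisor)
    {
        var r = dividend % divisor;
        return (r != 0 && Math.Sign(r) != Math.Sign(divisor)) ? r + divisor : r;
    }

    static void Main()
    {
        Console.WriteLine(Remainder(-7, 3));        // -1 (sign of the dividend)
        Console.WriteLine(CanonicalModulus(-7, 3)); //  2 (sign of the divisor)
        Console.WriteLine(Remainder(7, -3));        //  1
        Console.WriteLine(CanonicalModulus(7, -3)); // -2
    }
}
```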
+ + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new matrix straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. 
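The generic builder described above is, in Math.NET Numerics, normally reached through `Matrix<T>.Build` and `Vector<T>.Build`; that entry point and the method names in the sketch below (`Dense`, `DenseIdentity`, `DenseOfDiagonalArray`, `Sparse`) are assumptions based on the library's public API rather than something the text states.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class BuilderSketch
{
    static void Main()
    {
        var M = Matrix<double>.Build;   // generic matrix builder for T = double
        var V = Vector<double>.Build;   // generic vector builder for T = double

        var a = M.Dense(3, 3);                                    // 3x3, all cells zero
        var i = M.DenseIdentity(3);                               // one-diagonal identity
        var d = M.DenseOfDiagonalArray(new double[] { 1, 2, 3 }); // diagonal copied from an array
        var s = M.Sparse(1000, 1000);                             // large, all-zero sparse matrix

        var x = V.Dense(new double[] { 1, 2, 3 });                // bound directly to the array

        Console.WriteLine((i + d) * x);
        Console.WriteLine(a.RowCount + " " + s.RowCount);
    }
}
```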
+ + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
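For the "indexed enumerable" copy described above (keys at most once, omitted keys assumed zero), the sketch below assumes the Math.NET Numerics method name `SparseOfIndexed` and its `(row, column, value)` tuple format; treat both as assumptions.

```csharp
using System;
using System.Collections.Generic;
using MathNet.Numerics.LinearAlgebra;

class SparseOfIndexedSketch
{
    static void Main()
    {
        // Only the listed (row, column, value) entries are non-zero; each key
        // appears at most once, and every omitted key is assumed to be zero.
        var entries = new List<Tuple<int, int, double>>
        {
            Tuple.Create(0, 0, 4.0),
            Tuple.Create(1, 2, -1.5),
            Tuple.Create(9, 9, 2.0),
        };

        var m = Matrix<double>.Build.SparseOfIndexed(10, 10, entries);

        Console.WriteLine(m[1, 2]);   // -1.5
        Console.WriteLine(m[5, 5]);   // 0 (omitted key)
    }
}
```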
+ + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. 
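The diagonal-matrix builders described a little earlier distinguish binding directly to a raw array (changes to the array show up in the matrix and vice versa) from copying it. The method names assumed below, `Diagonal(double[])` for the bound variant and `DiagonalOfDiagonalArray(double[])` for the copy, are taken from the Math.NET Numerics builder API and are not confirmed by the text.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DiagonalBuilderSketch
{
    static void Main()
    {
        var diag = new double[] { 1.0, 2.0, 3.0 };

        // Bound directly to the raw array: very cheap, but the state is shared.
        var bound = Matrix<double>.Build.Diagonal(diag);
        diag[0] = 10.0;
        Console.WriteLine(bound[0, 0]);   // 10, the change to the array is visible

        // Copy variant: independent of the source array from this point on.
        var copy = Matrix<double>.Build.DiagonalOfDiagonalArray(diag);
        diag[1] = 99.0;
        Console.WriteLine(copy[1, 1]);    // still 2
    }
}
```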
+ + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new matrix straight from an initialized matrix storage instance. 
+ If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). 
+ This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. 
+ Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. 
+ A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + Supported data types are double, single, , and . + + + + Gets the lower triangular form of the Cholesky matrix. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + Supported data types are double, single, , and . + + + + Gets or sets a value indicating whether matrix is symmetric or not + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Gets or sets the eigen values (λ) of matrix in ascending value. + + + + + Gets or sets eigenvectors. + + + + + Gets or sets the block diagonal eigenvalue matrix. + + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + Supported data types are double, single, , and . + + + + Classes that solves a system of linear equations, AX = B. + + Supported data types are double, single, , and . + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, Ax = b + + The right hand side vector, b. + The left hand side Vector, x. + + + + Solves a system of linear equations, Ax = b. + + The right hand side vector, b. + The left hand side Matrix>, x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + Supported data types are double, single, , and . + + + + Gets the lower triangular factor. + + + + + Gets the upper triangular factor. + + + + + Gets the permutation applied to LU factorization. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. 
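+ 
+ For orientation, here is a minimal, illustrative C# sketch of the builder and factorization calls described above. It assumes this file documents the MathNet.Numerics.LinearAlgebra API (the LU remarks above refer to the Math.Net implementation) and that the package is referenced; names such as Matrix<double>.Build.DenseOfArray, Cholesky() and LU() are taken from that library and are meant as an example, not as the authoritative API surface.
+ 
+            using MathNet.Numerics.LinearAlgebra;
+ 
+            // Build a dense matrix and vector as independent copies of the given arrays.
+            var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 4, 1 }, { 1, 3 } });
+            var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });
+ 
+            // Cholesky requires a symmetric positive definite A and throws otherwise.
+            var chol = A.Cholesky();
+            Vector<double> x1 = chol.Solve(b);   // solves A*x = b via A = L*L'
+ 
+            // LU works for any square A; pivoting encodes P such that P*A = L*U.
+            var lu = A.LU();
+            Vector<double> x2 = lu.Solve(b);
+            double det = lu.Determinant;         // determinant taken from the factorization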
+ + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + The type of QR factorization go perform. + + + + + Compute the full QR factorization of a matrix. + + + + + Compute the thin QR factorization of a matrix. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + Supported data types are double, single, , and . + + + + Gets or sets orthogonal Q matrix + + + + + Gets the upper triangular factor R. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + Supported data types are double, single, , and . + + + Indicating whether U and VT matrices have been computed during SVD factorization. + + + + Gets the singular values (Σ) of matrix in ascending value. + + + + + Gets the left singular vectors (U - m-by-m unitary matrix) + + + + + Gets the transpose right singular vectors (transpose of V, an n-by-n unitary matrix) + + + + + Returns the singular values as a diagonal . + + The singular values as a diagonal . 
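+ 
+ A hedged C# sketch of the full/thin QR factorization described above, again assuming the MathNet.Numerics.LinearAlgebra API; QRMethod, Q, R and Solve are used here as illustrative names from that library.
+ 
+            using MathNet.Numerics.LinearAlgebra;
+            using MathNet.Numerics.LinearAlgebra.Factorization;
+ 
+            // Tall 3x2 system: thin QR yields Q (3x2) and R (2x2); full QR yields Q (3x3) and R (3x2).
+            var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, 0 }, { 1, 1 }, { 1, 2 } });
+            var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 2.0 });
+ 
+            var qr = A.QR(QRMethod.Thin);        // Householder-based QR
+            Vector<double> x = qr.Solve(b);      // least-squares solution of A*x = b
+            var Q = qr.Q;                        // orthogonal factor
+            var R = qr.R;                        // upper triangular factor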
+ + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + Supported data types are double, single, , and . + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + + + + The value of 1.0. + + + + + The value of 0.0. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. 
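+ 
+ To make the SVD properties listed earlier in this section concrete, a small C# sketch (same MathNet.Numerics assumption; Svd(), S, Rank, L2Norm and ConditionNumber are illustrative names from that library):
+ 
+            using MathNet.Numerics.LinearAlgebra;
+ 
+            var M = Matrix<double>.Build.DenseOfArray(new double[,] { { 2, 0, 0 }, { 0, 1e-8, 0 } });
+            var svd = M.Svd();                    // factorization is computed at construction time
+            Vector<double> s = svd.S;             // singular values (diagonal of Σ)
+            int rank = svd.Rank;                  // number of non-negligible singular values
+            double norm2 = svd.L2Norm;            // largest singular value
+            double cond = svd.ConditionNumber;    // max(S) / min(S)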
+ + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar denominator to use. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar numerator to use. + The matrix to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent matrix and store the result into the result matrix. + + The exponent matrix to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Adds a scalar to each element of the matrix. + + The scalar to add. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds a scalar to each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the addition. 
+ If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix. + + The scalar to subtract. + A new matrix containing the subtraction of this matrix and the scalar. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts each element of the matrix from a scalar. + + The scalar to subtract from. + A new matrix containing the subtraction of the scalar and this matrix. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of this matrix with a scalar. + + The scalar to multiply with. + The result of the multiplication. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides each element of this matrix with a scalar. + + The scalar to divide with. + The result of the division. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides a scalar by each element of the matrix. + + The scalar to divide. + The result of the division. + + + + Divides a scalar by each element of the matrix and places results into the result matrix. + + The scalar to divide. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.ColumnCount != rightSide.Count. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.RowCount. + If this.ColumnCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ). + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. 
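+ 
+ The scalar and matrix/vector arithmetic described above is available both as named methods and as operators. A short hedged C# example, assuming the MathNet.Numerics operator overloads:
+ 
+            using MathNet.Numerics.LinearAlgebra;
+ 
+            var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });
+            var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, 1.0 });
+ 
+            var B = A.Add(10.0);            // add a scalar to each element
+            var C = (A - B) * 2.0;          // operators forward to the Subtract/Multiply methods
+            Vector<double> y = A * v;       // matrix * vector, requires A.ColumnCount == v.Count
+            Vector<double> z = v * A;       // vector * matrix (left multiply), requires A.RowCount == v.Count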
+ + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.Rows. + If the result matrix's dimensions are not the this.Rows x other.Columns. + + + + Multiplies this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.Rows. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with the conjugate transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the conjugate transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the conjugate transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. 
+ The result of the multiplication. + + + + Raises this square matrix to a positive integer exponent and places the results into the result matrix. + + The positive integer exponent to raise the matrix to. + The result of the power. + + + + Multiplies this square matrix with another matrix and returns the result. + + The positive integer exponent to raise the matrix to. + + + + Negate each element of this matrix. + + A matrix containing the negated values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + if the result matrix's dimensions are not the same as this matrix. + + + + Complex conjugate each element of this matrix. + + A matrix containing the conjugated values. + + + + Complex conjugate each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + if the result matrix's dimensions are not the same as this matrix. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Pointwise multiplies this matrix with another matrix. + + The matrix to pointwise multiply with this one. + If this matrix and are not the same size. + A new matrix that is the pointwise multiplication of this matrix and . + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise divide this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + A new matrix that is the pointwise division of this matrix and . + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + If this matrix and are not the same size. 
+ If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise modulus. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise remainder. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Helper function to apply a unary function to a matrix. The function + f modifies the matrix given to it in place. Before its + called, a copy of the 'this' matrix is first created, then passed to + f. The copy is then returned as the result + + Function which takes a matrix, modifies it in place and returns void + New instance of matrix which is the result + + + + Helper function to apply a unary function which modifies a matrix + in place. + + Function which takes a matrix, modifies it in place and returns void + The matrix to be passed to f and where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two matrices + and modifies the latter in place. A copy of the "this" matrix is + first made and then passed to f together with the other matrix. The + copy is then returned as the result + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The resulting matrix + If this matrix and are not the same dimension. + + + + Helper function to apply a binary function which takes two matrices + and modifies the second one in place + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The matrix to store the result. + The resulting matrix + If this matrix and are not the same dimension. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The matrix to store the result. 
+ If this matrix and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + + + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + The other matrix 'y' + The matrix with the result and 'x' + + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Calculates the rank of the matrix. + + effective numerical rank, obtained from SVD + + + + Calculates the nullity of the matrix. + + effective numerical nullity, obtained from SVD + + + Calculates the condition number of this matrix. + The condition number of the matrix. 
+ The condition number is calculated using singular value decomposition. + + + Computes the determinant of this matrix. + The determinant of this matrix. + + + + Computes an orthonormal basis for the null space of this matrix, + also known as the kernel of the corresponding matrix transformation. + + + + + Computes an orthonormal basis for the column space of this matrix, + also known as the range or image of the corresponding matrix transformation. + + + + Computes the inverse of this matrix. + The inverse of this matrix. + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + If the result matrix's dimensions are not (this.Rows * lower.rows) x (this.Columns * lower.Columns). + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. 
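+ 
+ A brief C# sketch of the element-wise operations, Kronecker product and trace described above (same MathNet.Numerics assumption; PointwiseMultiply, PointwiseMaximum, KroneckerProduct and Trace are illustrative names from that library):
+ 
+            using MathNet.Numerics.LinearAlgebra;
+ 
+            var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, -2 }, { 3, -4 } });
+            var B = Matrix<double>.Build.Dense(2, 2, 2.0);   // 2x2 matrix, every element = 2
+ 
+            var hadamard = A.PointwiseMultiply(B);  // element-wise product, same shape required
+            var clipped  = A.PointwiseMaximum(0.0); // element-wise maximum with a scalar
+            var K = A.KroneckerProduct(B);          // (2*2)-by-(2*2) = 4x4 result
+            double tr = A.Trace();                  // sum of diagonal elements, square matrices only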
+ + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + For sparse matrices, the L2 norm is computed using a dense implementation of singular value decomposition. + In a later release, it will be replaced with a sparse implementation. + + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Returns a string that describes the type, dimensions and shape of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes this matrix. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Matrix class. + + + + + Gets the raw matrix data storage. + + + + + Gets the number of columns. + + The number of columns. + + + + Gets the number of rows. + + The number of rows. + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. 
+ This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + + + + Sets the value of the given element without range checking. + + + The row of the element. + + + The column of the element. + + + The value to set the element to. + + + + + Sets all values to zero. + + + + + Sets all values of a row to zero. + + + + + Sets all values of a column to zero. + + + + + Sets all values for all of the chosen rows to zero. + + + + + Sets all values for all of the chosen columns to zero. + + + + + Sets all values of a sub-matrix to zero. + + + + + Set all values whose absolute value is smaller than the threshold to zero, in-place. + + + + + Set all values that meet the predicate to zero, in-place. + + + + + Creates a clone of this instance. + + + A clone of the instance. + + + + + Copies the elements of this matrix to the given matrix. + + + The matrix to copy values into. + + + If target is . + + + If this and the target matrix do not have the same dimensions.. + + + + + Copies a row into an Vector. + + The row to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of rows. + + + + Copies a row into to the given Vector. + + The row to copy. + The Vector to copy the row into. + If the result vector is . + If is negative, + or greater than or equal to the number of rows. + If this.Columns != result.Count. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of rows. + is negative, + or greater than or equal to the number of columns. + (columnIndex + length) >= Columns. + If is not positive. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Copies a column into a new Vector>. + + The column to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of columns. + + + + Copies a column into to the given Vector. + + The column to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If this.Rows != result.Count. + + + + Copies the requested column elements into a new Vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of columns. + is negative, + or greater than or equal to the number of rows. + (rowIndex + length) >= Rows. + + If is not positive. + + + + Copies the requested column elements into the given vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . 
+ If is negative, + or greater than or equal to the number of columns. + If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Returns the elements of the diagonal in a Vector. + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a new matrix and inserts the given column at the given index. + + The index of where to insert the column. + The column to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of columns. + If the size of != the number of rows. + + + + Creates a new matrix with the given column removed. + + The index of the column to remove. + A new matrix without the chosen column. + If is < zero or >= the number of columns. + + + + Copies the values of the given Vector to the specified column. + + The column to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + + + + Copies the values of the given Vector to the specified sub-column. + + The column to copy the values to. + The row to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. 
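+ 
+ The row, column and sub-matrix accessors described above copy data out of (or into) the matrix. A hedged C# sketch, assuming the MathNet.Numerics method names Row, Column, SubMatrix, SetColumn and UpperTriangle:
+ 
+            using MathNet.Numerics.LinearAlgebra;
+ 
+            var A = Matrix<double>.Build.DenseOfArray(new double[,] {
+                { 1, 2, 3 },
+                { 4, 5, 6 },
+                { 7, 8, 9 }
+            });
+ 
+            Vector<double> row1 = A.Row(1);              // copy of row index 1: 4, 5, 6
+            Vector<double> col2 = A.Column(2);           // copy of column index 2: 3, 6, 9
+            var sub = A.SubMatrix(0, 2, 1, 2);           // rows 0..1, columns 1..2 as a new 2x2 matrix
+            A.SetColumn(0, new[] { 10.0, 11.0, 12.0 });  // overwrite column 0 in place
+            var upper = A.UpperTriangle();               // copy with entries below the diagonal zeroed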
+ + + + Copies the values of the given array to the specified column. + + The column to copy the values to. + The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + If the size of does not + equal the number of rows of this Matrix. + + + + Creates a new matrix and inserts the given row at the given index. + + The index of where to insert the row. + The row to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of rows. + If the size of != the number of columns. + + + + Creates a new matrix with the given row removed. + + The index of the row to remove. + A new matrix without the chosen row. + If is < zero or >= the number of rows. + + + + Copies the values of the given Vector to the specified row. + + The row to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given Vector to the specified sub-row. + + The row to copy the values to. + The column to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given array to the specified row. + + The row to copy the values to. + The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The column to start copying to. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The number of rows to copy. Must be positive. + The column to start copying to. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The row of the sub-matrix to start copying from. + The number of rows to copy. Must be positive. + The column to start copying to. + The column of the sub-matrix to start copying from. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of the given Vector to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . 
+ If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Returns the transpose of this matrix. + + The transpose of this matrix. + + + + Puts the transpose of this matrix into the result matrix. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + + + + Concatenates this matrix with the given matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Concatenates this matrix with the given matrix and places the result into the result matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Diagonally stacks his matrix on top of the given matrix. The new matrix is a M-by-N matrix, + where M = this.Rows + lower.Rows and N = this.Columns + lower.Columns. + The values of off the off diagonal matrices/blocks are set to zero. + + The lower, right matrix. + If lower is . + the combined matrix + + + + + + Diagonally stacks his matrix on top of the given matrix and places the combined matrix into the result matrix. + + The lower, right matrix. + The combined matrix + If lower is . + If the result matrix is . + If the result matrix's dimensions are not (this.Rows + lower.rows) x (this.Columns + lower.Columns). + + + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Returns this matrix as a multidimensional array. + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + + A multidimensional containing the values of this matrix. + + + + Returns the matrix's elements as an array with the data laid out column by column (column major). + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns the matrix's elements as an array with the data laid out row by row (row major). + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
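To make the column-major and row-major layouts above concrete, a small sketch (again assuming the generic Matrix<double> builder; the output comments follow the 3x3 example from the documentation):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ExportDemo
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1, 2, 3 },
            { 4, 5, 6 },
            { 7, 8, 9 }
        });

        // Column-major copy: 1, 4, 7, 2, 5, 8, 3, 6, 9
        double[] colMajor = m.ToColumnMajorArray();

        // Row-major copy: 1, 2, 3, 4, 5, 6, 7, 8, 9
        double[] rowMajor = m.ToRowMajorArray();

        // Independent 2D copy of the matrix.
        double[,] grid = m.ToArray();

        Console.WriteLine(string.Join(", ", colMajor));
        Console.WriteLine(string.Join(", ", rowMajor));
        Console.WriteLine(grid[2, 0]); // 7
    }
}
```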
+ + + Returns this matrix as an array of row arrays. + The returned arrays will be independent from this matrix. + A new memory block will be allocated for the arrays. + + + + Returns this matrix as an array of column arrays. + The returned arrays will be independent from this matrix. + A new memory block will be allocated for the arrays. + + + + Returns the internal multidimensional array of this matrix if, and only if, this matrix is stored by such an array internally. + Otherwise returns null. Changes to the returned array and the matrix will affect each other. + Use ToArray instead if you always need an independent array. + + + + Returns the internal column by column (column major) array of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToColumnMajorArray instead if you always need an independent array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns the internal row by row (row major) array of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToRowMajorArray instead if you always need an independent array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
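The difference between the To... copies above and the As... accessors documented here can be shown in a few lines. This is a sketch under the assumption that a DenseMatrix exposes its internal column-major buffer through AsColumnMajorArray() while a SparseMatrix returns null, as the documentation suggests:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class StorageAccessDemo
{
    static void Main()
    {
        var dense = DenseMatrix.OfArray(new double[,] { { 1, 2 }, { 3, 4 } });

        // Dense matrices are backed by a single column-major array, so the
        // As... accessor hands out the live internal buffer: writes show up in the matrix.
        double[] internalBuffer = dense.AsColumnMajorArray();
        internalBuffer[0] = 99;
        Console.WriteLine(dense[0, 0]); // 99

        // To... always allocates an independent copy.
        double[] copy = dense.ToColumnMajorArray();
        copy[0] = 0;
        Console.WriteLine(dense[0, 0]); // still 99

        // A sparse matrix is not stored as one column-major array,
        // so the As... accessor returns null.
        var sparse = SparseMatrix.OfMatrix(dense);
        Console.WriteLine(sparse.AsColumnMajorArray() == null); // True
    }
}
```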
+ + + Returns the internal row arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToRowArrays instead if you always need an independent array. + + + + + Returns the internal column arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToColumnArrays instead if you always need an independent array. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix. + + The column to start enumerating over. + The number of columns to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix and their index. + + The column to start enumerating over. + The number of columns to enumerating over. + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix. + + The row to start enumerating over. + The number of rows to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix and their index. + + The row to start enumerating over. + The number of rows to enumerating over. + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. 
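A short sketch of the enumeration helpers described above; the Item1/Item2/Item3 access assumes the indexed enumerators yield (index, value) tuples as documented:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class EnumerateDemo
{
    static void Main()
    {
        var m = Matrix<double>.Build.Random(4, 3);

        // All elements together with their (row, column) index.
        foreach (var entry in m.EnumerateIndexed())
        {
            Console.WriteLine($"[{entry.Item1},{entry.Item2}] = {entry.Item3}");
        }

        // Rows 1 and 2 only (start index 1, two rows), with their row index.
        foreach (var row in m.EnumerateRowsIndexed(1, 2))
        {
            Console.WriteLine($"row {row.Item1}: {row.Item2}");
        }
    }
}
```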
+ + + + + Applies a function to each value of this matrix and replaces the value with its result. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value with its result. + The row and column indices of each value (zero-based) are passed as first arguments to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + For each row, applies a function f to each element of the row, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each row. + + + + + For each column, applies a function f to each element of the column, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each column. + + + + + Applies a function f to each row vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Applies a function f to each column vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Reduces all row vectors by applying a function between two of them, until only a single vector is left. + + + + + Reduces all column vectors by applying a function between two of them, until only a single vector is left. 
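The Map family described above is the usual way to apply element-wise functions without writing index loops. A minimal sketch, assuming Map/MapIndexed return new matrices while MapInplace overwrites the receiver:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class MapDemo
{
    static void Main()
    {
        var m = Matrix<double>.Build.Dense(3, 3, (i, j) => i + j);

        // Map returns a new matrix; the original is untouched.
        var squared = m.Map(x => x * x);

        // MapIndexed also receives the zero-based row/column of each value.
        var tagged = m.MapIndexed((i, j, x) => x + 0.01 * i + 0.001 * j);

        // MapInplace overwrites this matrix with the mapped values.
        m.MapInplace(x => 2 * x);

        Console.WriteLine(squared);
        Console.WriteLine(tagged);
        Console.WriteLine(m);
    }
}
```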
+ + + + + Applies a function to each value pair of two matrices and replaces the value in the result vector. + + + + + Applies a function to each value pair of two matrices and returns the results as a new vector. + + + + + Applies a function to update the status with each value pair of two matrices and returns the resulting status. + + + + + Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a tuple with the index and values of the first element pair of two matrices of the same size satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element pairs of two matrices of the same size satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all elements satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all element pairs of two matrices of the same size satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to add. + The scalar value to add. + The result of the addition. + If is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to add. + The right matrix to add. + The result of the addition. + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts a scalar from each element of a matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to subtract. + The scalar value to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts each element of a matrix from a scalar. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to subtract. + The right matrix to subtract. + The result of the subtraction. 
+ If and don't have the same dimensions. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Divides a scalar with a matrix. + + The scalar to divide. + The matrix. + The result of the division. + If is . + + + + Divides a matrix with a scalar. + + The matrix to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of the matrix of the given divisor. + + The matrix whose elements we want to compute the modulus of. + The divisor to use. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of the given dividend of each element of the matrix. + + The dividend we want to compute the modulus of. + The matrix whose elements we want to use as divisor. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of two matrices. + + The matrix whose elements we want to compute the remainder of. + The divisor to use. + If and are not the same size. + If is . + + + + Computes the sqrt of a matrix pointwise + + The input matrix + + + + + Computes the exponential of a matrix pointwise + + The input matrix + + + + + Computes the log of a matrix pointwise + + The input matrix + + + + + Computes the log10 of a matrix pointwise + + The input matrix + + + + + Computes the sin of a matrix pointwise + + The input matrix + + + + + Computes the cos of a matrix pointwise + + The input matrix + + + + + Computes the tan of a matrix pointwise + + The input matrix + + + + + Computes the asin of a matrix pointwise + + The input matrix + + + + + Computes the acos of a matrix pointwise + + The input matrix + + + + + Computes the atan of a matrix pointwise + + The input matrix + + + + + Computes the sinh of a matrix pointwise + + The input matrix + + + + + Computes the cosh of a matrix pointwise + + The input matrix + + + + + Computes the tanh of a matrix pointwise + + The input matrix + + + + + Computes the absolute value of a matrix pointwise + + The input matrix + + + + + Computes the floor of a matrix pointwise + + The input matrix + + + + + Computes the ceiling of a matrix pointwise + + The input matrix + + + + + Computes the rounded value of a matrix pointwise + + The input matrix + + + + + Computes the Cholesky decomposition for a matrix. + + The Cholesky decomposition object. + + + + Computes the LU decomposition for a matrix. + + The LU decomposition object. 
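The overloaded operators documented above allow matrix expressions to be written directly; a brief sketch with sizes chosen so that every operation conforms:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class OperatorDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });
        var b = Matrix<double>.Build.DenseIdentity(2);
        var v = Vector<double>.Build.Dense(new double[] { 1, -1 });

        var sum     = a + b;    // element-wise sum, dimensions must match
        var shifted = a - 1.5;  // scalar subtracted from every element
        var scaled  = 2.0 * a;  // scalar multiplication
        var product = a * b;    // matrix product (inner dimensions must agree)
        var image   = a * v;    // matrix-vector product
        var negated = -a;

        Console.WriteLine(sum + shifted + scaled + product);
        Console.WriteLine(image);
        Console.WriteLine(negated);
    }
}
```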
+ + + + Computes the QR decomposition for a matrix. + + The type of QR factorization to perform. + The QR decomposition object. + + + + Computes the QR decomposition for a matrix using Modified Gram-Schmidt Orthogonalization. + + The QR decomposition object. + + + + Computes the SVD decomposition for a matrix. + + Compute the singular U and VT vectors or not. + The SVD decomposition object. + + + + Computes the EVD decomposition for a matrix. + + The EVD decomposition object. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. 
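For the factorizations listed above, the typical pattern is to factor once and then call Solve on the decomposition object. A sketch, assuming Cholesky/LU/QR each expose such a Solve method as documented (the system is symmetric positive definite so that Cholesky applies):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationDemo
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 1, 0 },
            { 1, 3, 1 },
            { 0, 1, 2 }
        });
        var b = Vector<double>.Build.Dense(new double[] { 1, 2, 3 });

        var xChol = a.Cholesky().Solve(b);
        var xLu   = a.LU().Solve(b);
        var xQr   = a.QR().Solve(b);

        Console.WriteLine(xChol);
        Console.WriteLine((a * xLu - b).L2Norm()); // residual close to 0
        Console.WriteLine((a * xQr - b).L2Norm());
    }
}
```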
+ + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The result matrix X. + + + + Converts a matrix to single precision. + + + + + Converts a matrix to double precision. + + + + + Converts a matrix to single precision complex numbers. + + + + + Converts a matrix to double precision complex numbers. + + + + + Gets a single precision complex matrix with the real parts from the given matrix. + + + + + Gets a double precision complex matrix with the real parts from the given matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Existing data may not be all zeros, so clearing may be necessary + if not all of it will be overwritten anyway. + + + + + If existing data is assumed to be all zeros already, + clearing it may be skipped if applicable. + + + + + Allow skipping zero entries (without enforcing skipping them). + When enumerating sparse matrices this can significantly speed up operations. + + + + + Force applying the operation to all fields even if they are zero. + + + + + It is not known yet whether a matrix is symmetric or not. + + + + + A matrix is symmetric + + + + + A matrix is Hermitian (conjugate symmetric). + + + + + A matrix is not symmetric + + + + + Defines an that uses a cancellation token as stop criterion. + + + + + Initializes a new instance of the class. + + + + + Initializes a new instance of the class. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. 
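The precision and complex conversion helpers documented above are handy when mixing real and complex processing. A sketch, under the assumption that ToSingle/ToComplex/Real/Imaginary resolve as extension methods on Matrix<double> and Matrix<Complex> when the MathNet.Numerics.LinearAlgebra namespace is imported:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ConversionDemo
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,] { { 1.5, -2.0 }, { 0.25, 4.0 } });

        // Narrow to single precision and widen to complex.
        Matrix<float> single = m.ToSingle();
        Matrix<System.Numerics.Complex> complex = m.ToComplex();

        // Extract the real and imaginary parts back into real-valued matrices.
        Matrix<double> re = complex.Real();
        Matrix<double> im = complex.Imaginary(); // all zeros here

        Console.WriteLine(single);
        Console.WriteLine(re);
        Console.WriteLine(im);
    }
}
```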
+ + + + Stop criterion that delegates the status determination to a delegate. + + + + + Create a new instance of this criterion with a custom implementation. + + Custom implementation with the same signature and semantics as the DetermineStatus method. + + + + Determines the status of the iterative calculation by delegating it to the provided delegate. + Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + + + + Clones this criterion and its settings. + + + + + Monitors an iterative calculation for signs of divergence. + + + + + The maximum relative increase the residual may experience without triggering a divergence warning. + + + + + The number of iterations over which a residual increase should be tracked before issuing a divergence warning. + + + + + The status of the calculation + + + + + The array that holds the tracking information. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified maximum + relative increase and the specified minimum number of tracking iterations. + + The maximum relative increase that the residual may experience before a divergence warning is issued. + The minimum number of iterations over which the residual must grow before a divergence warning is issued. + + + + Gets or sets the maximum relative increase that the residual may experience before a divergence warning is issued. + + Thrown if the Maximum is set to zero or below. + + + + Gets or sets the minimum number of iterations over which the residual must grow before + issuing a divergence warning. + + Thrown if the value is set to less than one. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Detect if solution is diverging + + true if diverging, otherwise false + + + + Gets required history Length + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Defines an that monitors residuals for NaN's. + + + + + The status of the calculation + + + + + The iteration number of the last iteration. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. 
+ + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + The base interface for classes that provide stop criteria for iterative calculations. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current IIterationStopCriterion. Status is set to Status field of current object. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + is not a legal value. Status should be set in implementation. + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + To implementers: Invoking this method should not clear the user defined + property values, only the state that is used to track the progress of the + calculation. + + + + Defines the interface for classes that solve the matrix equation Ax = b in + an iterative manner. + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Defines the interface for objects that can create an iterative solver with + specific settings. This interface is used to pass iterative solver creation + setup information around. + + + + + Gets the type of the solver that will be created by this setup object. + + + + + Gets type of preconditioner, if any, that will be created by this setup object. + + + + + Creates the iterative solver to be used. + + + + + Creates the preconditioner to be used by default (can be overwritten). + + + + + Gets the relative speed of the solver. + + Returns a value between 0 and 1, inclusive. + + + + Gets the relative reliability of the solver. + + Returns a value between 0 and 1 inclusive. + + + + The base interface for preconditioner classes. + + + + Preconditioners are used by iterative solvers to improve the convergence + speed of the solving process. Increase in convergence speed + is related to the number of iterations necessary to get a converged solution. + So while in general the use of a preconditioner means that the iterative + solver will perform fewer iterations it does not guarantee that the actual + solution time decreases given that some preconditioners can be expensive to + setup and run. + + + Note that in general changes to the matrix will invalidate the preconditioner + if the changes occur after creating the preconditioner. + + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix on which the preconditioner is based. + + + + Approximates the solution to the matrix equation Mx = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. 
+ + + + Defines an that monitors the numbers of iteration + steps as stop criterion. + + + + + The default value for the maximum number of iterations the process is allowed + to perform. + + + + + The maximum number of iterations the calculation is allowed to perform. + + + + + The status of the calculation + + + + + Initializes a new instance of the class with the default maximum + number of iterations. + + + + + Initializes a new instance of the class with the specified maximum + number of iterations. + + The maximum number of iterations the calculation is allowed to perform. + + + + Gets or sets the maximum number of iterations the calculation is allowed to perform. + + Thrown if the Maximum is set to a negative value. + + + + Returns the maximum number of iterations to the default. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Iterative Calculation Status + + + + + An iterator that is used to check if an iterative calculation should continue or stop. + + + + + The collection that holds all the stop criteria and the flag indicating if they should be added + to the child iterators. + + + + + The status of the iterator. + + + + + Initializes a new instance of the class with the default stop criteria. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Gets the current calculation status. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual iterators may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Indicates to the iterator that the iterative process has been cancelled. + + + Does not reset the stop-criteria. + + + + + Resets the to the pre-calculation state. + + + + + Creates a deep clone of the current iterator. + + The deep clone of the current iterator. + + + + Defines an that monitors residuals as stop criterion. + + + + + The maximum value for the residual below which the calculation is considered converged. 
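Putting the iterative-solver pieces together: an Iterator bundles the stop criteria and is passed, together with a solver and an optional preconditioner, to the solve call. A sketch, assuming the SolveIterative overloads on Matrix<double> and the BiCgStab solver from MathNet.Numerics.LinearAlgebra.Double.Solvers; the small dense system is only for illustration, since iterative solvers are aimed at large sparse systems:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IterativeSolveDemo
{
    static void Main()
    {
        // Small diagonally dominant system Ax = b.
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 5, 1, 0 },
            { 1, 5, 1 },
            { 0, 1, 5 }
        });
        var b = Vector<double>.Build.Dense(new double[] { 6, 7, 6 });

        // Stop after at most 1000 iterations, or once the residual drops below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        var x = a.SolveIterative(b, solver, iterator, new UnitPreconditioner<double>());

        Console.WriteLine(x);
        Console.WriteLine(iterator.Status); // converged, or stopped without convergence
    }
}
```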
+ + + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + The status of the calculation + + + + + The number of iterations since the residuals got below the maximum. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified + maximum residual and minimum number of iterations. + + + The maximum value for the residual below which the calculation is considered converged. + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + Gets or sets the maximum value for the residual below which the calculation is considered + converged. + + Thrown if the Maximum is set to a negative value. + + + + Gets or sets the minimum number of iterations for which the residual has to be + below the maximum before the calculation is considered converged. + + Thrown if the BelowMaximumFor is set to a value less than 1. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Loads the available objects from the specified assembly. + + The assembly which will be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The type in the assembly which should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The of the assembly that should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + + + + A unit preconditioner. This preconditioner does not actually do anything + it is only used when running an without + a preconditioner. + + + + + The coefficient matrix on which this preconditioner operates. + Is used to check dimensions on the different vectors that are processed. + + + + + Initializes the preconditioner and loads the internal data structures. + + + The matrix upon which the preconditioner is based. + + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + If and do not have the same size. + + + - or - + + + If the size of is different the number of rows of the coefficient matrix. 
+ + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Evaluate the row and column at a specific data index. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + + The array containing the row indices of the existing rows. Element "i" of the array gives the index of the + element in the array that is first non-zero element in a row "i". 
+ The last value is equal to ValueCount, so that the number of non-zero entries in row "i" is always + given by RowPointers[i+i] - RowPointers[i]. This array thus has length RowCount+1. + + + + + An array containing the column indices of the non-zero values. Element "j" of the array + is the number of the column in matrix that contains the j-th value in the array. + + + + + Array that contains the non-zero elements of matrix. Values of the non-zero elements of matrix are mapped into the values + array using the row-major storage mapping described in a compressed sparse row (CSR) format. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Delete value from internal storage + + Index of value in nonZeroValues array + Row number of matrix + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Find item Index in nonZeroValues array + + Matrix row index + Matrix column index + Item index + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Array that contains the indices of the non-zero values. + + + + + Array that contains the non-zero elements of the vector. + + + + + Gets the number of non-zero elements in the vector. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the vector storage format is dense. + + + + + Gets or sets the value at the given index, with range checking. + + + The index of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + The index of the element. + The requested element. + Not range-checked. + + + + Sets the element without range checking. + + The index of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. 
+ + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + + Defines the generic class for Vector classes. + + Supported data types are double, single, , and . + + + + The zero value for type T. + + + + + The value of 1.0 for type T. + + + + + Negates vector and save result to + + Target vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar denominator to use. + The vector to store the result of the division. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar numerator to use. + The vector to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. 
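To tie the compressed sparse row (CSR) fields described a little earlier (RowPointers, ColumnIndices, Values) to a concrete matrix, here is a sketch. It assumes the sparse storage object is reachable through the matrix's Storage property and that the three arrays are public, as the documentation above suggests:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Storage;

class CsrDemo
{
    static void Main()
    {
        // 3x4 matrix with five non-zero entries.
        var m = SparseMatrix.OfArray(new double[,]
        {
            { 1, 0, 0, 2 },
            { 0, 0, 3, 0 },
            { 4, 5, 0, 0 }
        });

        // CSR layout:
        //   RowPointers   : index into Values where each row starts (length RowCount + 1)
        //   ColumnIndices : column of each stored value
        //   Values        : the non-zero values, row by row
        var csr = (SparseCompressedRowMatrixStorage<double>)m.Storage;

        Console.WriteLine(string.Join(",", csr.RowPointers));   // 0,2,3,5
        Console.WriteLine(string.Join(",", csr.ColumnIndices)); // 0,3,2,0,1
        Console.WriteLine(string.Join(",", csr.Values));        // 1,2,3,4,5
        Console.WriteLine(m.NonZerosCount);                     // 5
    }
}
```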
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Adds a scalar to each element of the vector. + + The scalar to add. + A copy of the vector with the scalar added. + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + If this vector and are not the same size. + + + + Adds another vector to this vector. + + The vector to add to this one. + A new vector containing the sum of both vectors. + If this vector and are not the same size. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Subtracts a scalar from each element of the vector. + + The scalar to subtract. + A new vector containing the subtraction of this vector and the scalar. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Subtracts each element of the vector from a scalar. + + The scalar to subtract from. + A new vector containing the subtraction of the scalar and this vector. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Returns a negated vector. + + The negated vector. + Added as an alternative to the unary negation operator. + + + + Negates vector and save result to + + Target vector + + + + Subtracts another vector from this vector. + + The vector to subtract from this one. + A new vector containing the subtraction of the two vectors. + If this vector and are not the same size. 
+ + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Return vector with complex conjugate values of the source vector + + Conjugated vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector. + + The scalar to multiply. + A new vector that is the multiplication of the vector and the scalar. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + If this vector and are not the same size. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + If is not of the same size. + + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + If is not of the same size. + If is . + + + + + Divides each element of the vector by a scalar. + + The scalar to divide with. + A new vector that is the division of the vector and the scalar. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar to divide with. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Divides a scalar by each element of the vector. + + The scalar to divide. + A new vector that is the division of the vector and the scalar. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. 
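A compact sketch of the vector arithmetic above, including the sign conventions that distinguish Modulus (sign of the divisor) from Remainder (sign of the dividend); the numeric comments are worked out by hand and the method names are taken from the documentation:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorDemo
{
    static void Main()
    {
        var u = Vector<double>.Build.Dense(new double[] { 5, -3, 7 });
        var w = Vector<double>.Build.Dense(new double[] { 1,  2, 3 });

        var sum = u.Add(w);              // 6, -1, 10
        double dot = u.DotProduct(w);    // 5*1 + (-3)*2 + 7*3 = 20
        var outer = u.OuterProduct(w);   // 3x3 matrix with entries u[i]*w[j]

        // Remainder follows the sign of the dividend (like the C# % operator),
        // Modulus follows the sign of the divisor.
        var rem = u.Remainder(2.0);      // 1, -1, 1
        var mod = u.Modulus(2.0);        // 1,  1, 1

        Console.WriteLine(sum);
        Console.WriteLine(dot);
        Console.WriteLine(rem);
        Console.WriteLine(mod);
        Console.WriteLine(outer);
    }
}
```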
+ + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this vector with another vector. + + The vector to pointwise multiply with this one. + A new vector which is the pointwise multiplication of the two vectors. + If this vector and are not the same size. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector. + + The pointwise denominator vector to use. + A new vector which is the pointwise division of the two vectors. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise division. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The matrix to store the result into. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + The vector to store the result into. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise modulus. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise remainder. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Helper function to apply a unary function to a vector. The function + f modifies the vector given to it in place. Before its + called, a copy of the 'this' vector with the same dimension is + first created, then passed to f. 
The copy is returned as the result + + Function which takes a vector, modifies it in place and returns void + New instance of vector which is the result + + + + Helper function to apply a unary function which modifies a vector + in place. + + Function which takes a vector, modifies it in place and returns void + The vector where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes a scalar and + a vector and modifies the latter in place. A copy of the "this" + vector is therefore first made and then passed to f together with + the scalar argument. The copy is then returned as the result + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The resulting vector + + + + Helper function to apply a binary function which takes a scalar and + a vector, modifies the latter in place and returns void. + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The vector where the result will be placed + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the latter in place. A copy of the "this" vector is + first made and then passed to f together with the other vector. The + copy is then returned as the result + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the second one in place + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The vector to store the result. + If this vector and are not the same size. 
+ + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + The vector to store the result + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector. + + The other vector + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. 
+ + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = (sum(abs(this[i])^p))^(1/p) + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + The p value. + This vector normalized to a unit vector with respect to the p-norm. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the value of maximum element. + + The value of maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the value of the minimum element. + + The value of the minimum element. + + + + Returns the index of the minimum element. + + The index of minimum element. 
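The norm and extremum members listed above can be exercised as follows (illustrative values, Math.NET Numerics assumed):

```csharp
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.DenseOfArray(new[] { 3.0, -4.0, 0.0 });

double l1  = v.L1Norm();         // 7   (sum of absolute values)
double l2  = v.L2Norm();         // 5   (Euclidean length)
double inf = v.InfinityNorm();   // 4   (largest absolute value)
double p3  = v.Norm(3.0);        // (3^3 + 4^3)^(1/3) ≈ 4.498

var unit = v.Normalize(2.0);     // [0.6, -0.8, 0], unit length in the 2-norm

double max    = v.Maximum();          // 3
int    argMin = v.MinimumIndex();     // 1 (the -4 entry)
double absMax = v.AbsoluteMaximum();  // 4
```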
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Computes the sum of the absolute value of the vector's elements. + + The sum of the absolute value of the vector's elements. + + + + Indicates whether the current object is equal to another object of the same type. + + An object to compare with this object. + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Returns an enumerator that iterates through the collection. + + + A that can be used to iterate through the collection. + + + + + Returns an enumerator that iterates through a collection. + + + An object that can be used to iterate through the collection. + + + + + Returns a string that describes the type, dimensions and shape of this vector. + + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Character to use to print if there is not enough space to print all entries. Typical value: "..". + Character to use to separate two columns on a line. Typical value: " " (2 spaces). + Character to use to separate two rows/lines. Typical value: Environment.NewLine. + Function to provide a string for any given entry value. + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that represents the content of this vector, column by column. + + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector, column by column and with a type header. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Vector class. + + + + + Gets the raw vector data storage. + + + + + Gets the length or number of dimensions of this vector. + + + + Gets or sets the value at the given . + The index of the value to get or set. + The value of the vector at the given . + If is negative or + greater than the size of the vector. + + + Gets the value at the given without range checking.. + The index of the value to get or set. + The value of the vector at the given . 
+ + + Sets the at the given without range checking.. + The index of the value to get or set. + The value to set. + + + + Resets all values to zero. + + + + + Sets all values of a subvector to zero. + + + + + Set all values whose absolute value is smaller than the threshold to zero, in-place. + + + + + Set all values that meet the predicate to zero, in-place. + + + + + Returns a deep-copy clone of the vector. + + A deep-copy clone of the vector. + + + + Set the values of this vector to the given values. + + The array containing the values to use. + If is . + If is not the same size as this vector. + + + + Copies the values of this vector into the target vector. + + The vector to copy elements into. + If is . + If is not the same size as this vector. + + + + Creates a vector containing specified elements. + + The first element to begin copying from. + The number of elements to copy. + A vector containing a copy of the specified elements. + If is not positive or + greater than or equal to the size of the vector. + If + is greater than or equal to the size of the vector. + + If is not positive. + + + + Copies the values of a given vector into a region in this vector. + + The field to start copying to + The number of fields to copy. Must be positive. + The sub-vector to copy from. + If is + + + + Copies the requested elements from this vector to another. + + The vector to copy the elements to. + The element to start copying from. + The element to start copying to. + The number of elements to copy. + + + + Returns the data contained in the vector as an array. + The returned array will be independent from this vector. + A new memory block will be allocated for the array. + + The vector's data as an array. + + + + Returns the internal array of this vector if, and only if, this vector is stored by such an array internally. + Otherwise returns null. Changes to the returned array and the vector will affect each other. + Use ToArray instead if you always need an independent array. + + + + + Create a matrix based on this vector in column form (one single column). + + + This vector as a column matrix. + + + + + Create a matrix based on this vector in row form (one single row). + + + This vector as a row matrix. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector. + + + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector. + + + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector and their index. + + + The enumerator returns a Tuple with the first value being the element index + and the second value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector and their index. + + + The enumerator returns a Tuple with the first value being the element index + and the second value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Applies a function to each value of this vector and replaces the value with its result. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). 
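A brief sketch of the copy, sub-vector and conversion members described above; variable names and values are invented.

```csharp
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0, 4.0 });

var middle   = v.SubVector(1, 2);   // copy of [2, 3]
double[] arr = v.ToArray();         // independent array copy
var col      = v.ToColumnMatrix();  // 4x1 matrix
var row      = v.ToRowMatrix();     // 1x4 matrix

// Copy a 2-element region into this vector starting at index 0.
v.SetSubVector(0, 2, Vector<double>.Build.DenseOfArray(new[] { 9.0, 8.0 }));

foreach (var pair in v.EnumerateIndexed())
{
    // pair carries (index, value); zero entries are included
}
```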
+ + + + + Applies a function to each value of this vector and replaces the value with its result. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and returns the results as a new vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and returns the results as a new vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value pair of two vectors and replaces the value in the result vector. + + + + + Applies a function to each value pair of two vectors and returns the results as a new vector. + + + + + Applies a function to update the status with each value pair of two vectors and returns the resulting status. + + + + + Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a tuple with the index and values of the first element pair of two vectors of the same size satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element pairs of two vectors of the same size satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all elements satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). 
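The map and predicate helpers above could be used like this (illustrative sketch, same library assumption):

```csharp
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, -2.0, 3.0 });

var squared = v.Map(x => x * x);              // new vector [1, 4, 9]
var scaled  = v.MapIndexed((i, x) => i * x);  // [0, -2, 6]
v.MapInplace(x => 2.0 * x);                   // v becomes [2, -4, 6]

bool anyNegative  = v.Exists(x => x < 0.0);          // true
bool allFinite    = v.ForAll(x => !double.IsNaN(x)); // true
var firstNegative = v.Find(x => x < 0.0);            // (index, value) pair or null
```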
+ + + + + Returns true if all element pairs of two vectors of the same size satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a Vector containing the same values of . + + This method is included for completeness. + The vector to get the values from. + A vector containing the same values as . + If is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Adds a scalar to each element of a vector. + + The vector to add to. + The scalar value to add. + The result of the addition. + If is . + + + + Adds a scalar to each element of a vector. + + The scalar value to add. + The vector to add to. + The result of the addition. + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of a vector. + + The vector to subtract from. + The scalar value to subtract. + The result of the subtraction. + If is . + + + + Subtracts each element of a vector from a scalar. + + The scalar value to subtract from. + The vector to subtract. + The result of the subtraction. + If is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a scalar with a vector. + + The scalar to divide. + The vector. + The result of the division. + If is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Pointwise divides two Vectors. + + The vector to divide. + The other vector. + The result of the division. + If and are not the same size. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the remainder of. + The divisor to use. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of the given dividend of each element of the vector. + + The dividend we want to compute the remainder of. + The vector whose elements we want to use as divisor. + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of two vectors. + + The vector whose elements we want to compute the remainder of. + The divisor to use. + If and are not the same size. + If is . 
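The operator overloads summarized above allow the usual arithmetic syntax; note that `*` between two vectors is documented as the dot product, not an element-wise product. A small sketch with invented values:

```csharp
using MathNet.Numerics.LinearAlgebra;

var a = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
var b = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

var sum     = a + b;      // element-wise addition
var shifted = a + 10.0;   // scalar added to every element
var flipped = 10.0 - a;   // scalar minus each element
var scaled  = 2.0 * a;    // scalar multiplication
var neg     = -a;         // negation
double dot  = a * b;      // dot product: 32
```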
+ + + + Computes the sqrt of a vector pointwise + + The input vector + + + + + Computes the exponential of a vector pointwise + + The input vector + + + + + Computes the log of a vector pointwise + + The input vector + + + + + Computes the log10 of a vector pointwise + + The input vector + + + + + Computes the sin of a vector pointwise + + The input vector + + + + + Computes the cos of a vector pointwise + + The input vector + + + + + Computes the tan of a vector pointwise + + The input vector + + + + + Computes the asin of a vector pointwise + + The input vector + + + + + Computes the acos of a vector pointwise + + The input vector + + + + + Computes the atan of a vector pointwise + + The input vector + + + + + Computes the sinh of a vector pointwise + + The input vector + + + + + Computes the cosh of a vector pointwise + + The input vector + + + + + Computes the tanh of a vector pointwise + + The input vector + + + + + Computes the absolute value of a vector pointwise + + The input vector + + + + + Computes the floor of a vector pointwise + + The input vector + + + + + Computes the ceiling of a vector pointwise + + The input vector + + + + + Computes the rounded value of a vector pointwise + + The input vector + + + + + Converts a vector to single precision. + + + + + Converts a vector to double precision. + + + + + Converts a vector to single precision complex numbers. + + + + + Converts a vector to double precision complex numbers. + + + + + Gets a single precision complex vector with the real parts from the given vector. + + + + + Gets a double precision complex vector with the real parts from the given vector. + + + + + Gets a real vector representing the real parts of a complex vector. + + + + + Gets a real vector representing the real parts of a complex vector. + + + + + Gets a real vector representing the imaginary parts of a complex vector. + + + + + Gets a real vector representing the imaginary parts of a complex vector. + + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + + Predictor matrix X + Response vector Y + The direct method to be used to compute the regression. + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + + Predictor matrix X + Response matrix Y + The direct method to be used to compute the regression. + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + The direct method to be used to compute the regression. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + The direct method to be used to compute the regression. + Best fitting list of model parameters β for each element in the predictor-arrays. 
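For the least-squares entries in this part of the list, a hedged example of solving X*β ≈ y with the QR-based and normal-equations-based routines (predictor and response values are invented):

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearRegression;

// Four observations, two model parameters (intercept column + one predictor).
var X = Matrix<double>.Build.DenseOfArray(new[,]
{
    { 1.0, 1.0 },
    { 1.0, 2.0 },
    { 1.0, 3.0 },
    { 1.0, 4.0 },
});
var y = Vector<double>.Build.DenseOfArray(new[] { 2.1, 3.9, 6.2, 7.8 });

// β minimizing the least-squares residuals ||X*β - y||.
Vector<double> betaQr = MultipleRegression.QR(X, y);
Vector<double> betaNe = MultipleRegression.NormalEquations(X, y);
```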
+ + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. 
+ Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as (a, b) tuple, + where a is the intercept and b the slope. + + Predictor (independent) + Response (dependent) + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as (a, b) tuple, + where a is the intercept and b the slope. + + Predictor-Response samples as tuples + + + + Least-Squares fitting the points (x,y) to a line y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + Predictor (independent) + Response (dependent) + + + + Least-Squares fitting the points (x,y) to a line y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + Predictor-Response samples as tuples + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response vector Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response matrix Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response vector Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + True if an intercept should be added as first artificial predictor value. Default = false. + + + + Weighted Linear Regression using normal equations. + + List of sample vectors (predictor) together with their response. + List of weights, one for each sample. + True if an intercept should be added as first artificial predictor value. Default = false. 
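The simple line-fit helpers described above (y : x -> a + b*x and y : x -> b*x) might be called like this; the data points are invented, and the (intercept, slope) ordering follows the documented (a, b) tuple:

```csharp
using MathNet.Numerics;

double[] x = { 1.0, 2.0, 3.0, 4.0 };
double[] y = { 3.1, 5.0, 6.9, 9.1 };

// Returns (intercept a, slope b) for y : x -> a + b*x.
var line = Fit.Line(x, y);
double a = line.Item1;   // ≈ 1.05
double b = line.Item2;   // ≈ 1.99

// Slope-only fit through the origin, y : x -> b*x.
double bOrigin = Fit.LineThroughOrigin(x, y);
```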
+ + + + Locally-Weighted Linear Regression using normal equations. + + + + + Locally-Weighted Linear Regression using normal equations. + + + + + First Order AB method(same as Forward Euler) + + Initial value + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Second Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Third Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Fourth Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + ODE Solver Algorithms + + + + + Second Order Runge-Kutta method + + initial value + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Fourth Order Runge-Kutta method + + initial value + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Second Order Runge-Kutta to solve ODE SYSTEM + + initial vector + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Fourth Order Runge-Kutta to solve ODE SYSTEM + + initial vector + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm is an iterative method for solving box-constrained nonlinear optimization problems + http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz + + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The lower bound + The upper bound + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems + + + + + Creates BFGS minimizer + + The gradient tolerance + The parameter tolerance + The function progress tolerance + The maximum number of iterations + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + + Creates a base class for BFGS minimization + + + + + Broyden-Fletcher-Goldfarb-Shanno solver for finding function minima + See http://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm + Inspired by implementation: https://github.com/PatWie/CppNumericalSolvers/blob/master/src/BfgsSolver.cpp + + + + + Finds a minimum of a function by the BFGS quasi-Newton method + This uses the function and it's gradient (partial derivatives in each direction) and approximates the Hessian + + An initial guess + Evaluates the function at a point + Evaluates the gradient of the function at a point + The minimum found + + + + Objective function with a frozen evaluation that must not be changed from the outside. + + + + Create a new unevaluated and independent copy of this objective function + + + + Objective function with a mutable evaluation. + + + + Create a new independent copy of this objective function, evaluated at the same point. + + + + Get the y-values of the observations. 
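As a concrete illustration of the Runge-Kutta entries above, here is a sketch that integrates dy/dt = t·y (exact solution exp(t²/2)). The right-hand side is symmetric in its two arguments, so the sketch does not depend on the delegate's exact argument order, which is otherwise an assumption here.

```csharp
using System;
using MathNet.Numerics.OdeSolvers;

// dy/dt = t*y, y(0) = 1  =>  y(t) = exp(t^2 / 2)
Func<double, double, double> f = (t, yv) => t * yv;

// Arguments follow the documented order: initial value, start time,
// end time, size of the output array (the larger, the finer), ode function.
double[] y = RungeKutta.FourthOrder(1.0, 0.0, 1.0, 101, f);

double yEnd = y[y.Length - 1];   // ≈ Math.Exp(0.5) ≈ 1.6487
```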
+ + + + + Get the values of the weights for the observations. + + + + + Get the y-values of the fitted model that correspond to the independent values. + + + + + Get the values of the parameters. + + + + + Get the residual sum of squares. + + + + + Get the Gradient vector. G = J'(y - f(x; p)) + + + + + Get the approximated Hessian matrix. H = J'J + + + + + Get the number of calls to function. + + + + + Get the number of calls to jacobian. + + + + + Get the degree of freedom. + + + + + The scale factor for initial mu + + + + + Non-linear least square fitting by the Levenberg-Marduardt algorithm. + + The objective function, including model, observations, and parameter bounds. + The initial guess values. + The initial damping parameter of mu. + The stopping threshold for infinity norm of the gradient vector. + The stopping threshold for L2 norm of the change of parameters. + The stopping threshold for L2 norm of the residuals. + The max iterations. + The result of the Levenberg-Marquardt minimization + + + + Limited Memory version of Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm + + + + + + Creates L-BFGS minimizer + + Numbers of gradients and steps to store. + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + Search for a step size alpha that satisfies the weak Wolfe conditions. The weak Wolfe + Conditions are + i) Armijo Rule: f(x_k + alpha_k p_k) <= f(x_k) + c1 alpha_k p_k^T g(x_k) + ii) Curvature Condition: p_k^T g(x_k + alpha_k p_k) >= c2 p_k^T g(x_k) + where g(x) is the gradient of f(x), 0 < c1 < c2 < 1. + + Implementation is based on http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + + references: + http://en.wikipedia.org/wiki/Wolfe_conditions + http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + + + + Implemented following http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + The objective function being optimized, evaluated at the starting point of the search + Search direction + Initial size of the step in the search direction + + + + The objective function being optimized, evaluated at the starting point of the search + Search direction + Initial size of the step in the search direction + The upper bound + + + + Creates a base class for minimization + + The gradient tolerance + The parameter tolerance + The function progress tolerance + The maximum number of iterations + + + + Class implementing the Nelder-Mead simplex algorithm, used to find a minima when no gradient is available. + Called fminsearch() in Matlab. 
A description of the algorithm can be found at + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + or + https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method + + + + + Finds the minimum of the objective function without an initial perturbation, the default values used + by fminsearch() in Matlab are used instead + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + + The objective function, no gradient or hessian needed + The initial guess + The minimum point + + + + Finds the minimum of the objective function with an initial perturbation + + The objective function, no gradient or hessian needed + The initial guess + The initial perturbation + The minimum point + + + + Finds the minimum of the objective function without an initial perturbation, the default values used + by fminsearch() in Matlab are used instead + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + + The objective function, no gradient or hessian needed + The initial guess + The minimum point + + + + Finds the minimum of the objective function with an initial perturbation + + The objective function, no gradient or hessian needed + The initial guess + The initial perturbation + The minimum point + + + + Evaluate the objective function at each vertex to create a corresponding + list of error values for each vertex + + + + + + + + Check whether the points in the error profile have so little range that we + consider ourselves to have converged + + + + + + + + + Examine all error values to determine the ErrorProfile + + + + + + + Construct an initial simplex, given starting guesses for the constants, and + initial step sizes for each dimension + + + + + + + Test a scaling operation of the high point, and replace it if it is an improvement + + + + + + + + + + + Contract the simplex uniformly around the lowest point + + + + + + + + + Compute the centroid of all points except the worst + + + + + + + + The value of the constant + + + + + Returns the best fit parameters. + + + + + Returns the standard errors of the corresponding parameters + + + + + Returns the y-values of the fitted model that correspond to the independent values. + + + + + Returns the covariance matrix at minimizing point. + + + + + Returns the correlation matrix at minimizing point. + + + + + The stopping threshold for the function value or L2 norm of the residuals. + + + + + The stopping threshold for L2 norm of the change of the parameters. + + + + + The stopping threshold for infinity norm of the gradient. + + + + + The maximum number of iterations. + + + + + The lower bound of the parameters. + + + + + The upper bound of the parameters. + + + + + The scale factors for the parameters. + + + + + Objective function where neither Gradient nor Hessian is available. + + + + + Objective function where the Gradient is available. Greedy evaluation. + + + + + Objective function where the Gradient is available. Lazy evaluation. + + + + + Objective function where the Hessian is available. Greedy evaluation. + + + + + Objective function where the Hessian is available. Lazy evaluation. + + + + + Objective function where both Gradient and Hessian are available. Greedy evaluation. + + + + + Objective function where both Gradient and Hessian are available. Lazy evaluation. + + + + + Objective function where neither first nor second derivative is available. + + + + + Objective function where the first derivative is available. 
+ + + + + Objective function where the first and second derivatives are available. + + + + + objective model with a user supplied jacobian for non-linear least squares regression. + + + + + Objective model for non-linear least squares regression. + + + + + Objective model with a user supplied jacobian for non-linear least squares regression. + + + + + Objective model for non-linear least squares regression. + + + + + Objective function with a user supplied jacobian for nonlinear least squares regression. + + + + + Objective function for nonlinear least squares regression. + The numerical jacobian with accuracy order is used. + + + + + Adapts an objective function with only value implemented + to provide a gradient as well. Gradient calculation is + done using the finite difference method, specifically + forward differences. + + For each gradient computed, the algorithm requires an + additional number of function evaluations equal to the + functions's number of input parameters. + + + + + Set or get the values of the independent variable. + + + + + Set or get the values of the observations. + + + + + Set or get the values of the weights for the observations. + + + + + Get whether parameters are fixed or free. + + + + + Get the number of observations. + + + + + Get the number of unknown parameters. + + + + + Get the degree of freedom + + + + + Get the number of calls to function. + + + + + Get the number of calls to jacobian. + + + + + Set or get the values of the parameters. + + + + + Get the y-values of the fitted model that correspond to the independent values. + + + + + Get the residual sum of squares. + + + + + Get the Gradient vector of x and p. + + + + + Get the Hessian matrix of x and p, J'WJ + + + + + Set observed data to fit. + + + + + Set parameters and bounds. + + The initial values of parameters. + The list to the parameters fix or free. + + + + Non-linear least square fitting by the trust region dogleg algorithm. + + + + + The trust region subproblem. + + + + + The stopping threshold for the trust region radius. + + + + + Non-linear least square fitting by the trust-region algorithm. + + The objective model, including function, jacobian, observations, and parameter bounds. + The subproblem + The initial guess values. + The stopping threshold for L2 norm of the residuals. + The stopping threshold for infinity norm of the gradient vector. + The stopping threshold for L2 norm of the change of parameters. + The stopping threshold for trust region radius + The max iterations. + + + + + Non-linear least square fitting by the trust region Newton-Conjugate-Gradient algorithm. + + + + + Class to represent a permutation for a subset of the natural numbers. + + + + + Entry _indices[i] represents the location to which i is permuted to. + + + + + Initializes a new instance of the Permutation class. + + An array which represents where each integer is permuted too: indices[i] represents that integer i + is permuted to location indices[i]. + + + + Gets the number of elements this permutation is over. + + + + + Computes where permutes too. + + The index to permute from. + The index which is permuted to. + + + + Computes the inverse of the permutation. + + The inverse of the permutation. + + + + Construct an array from a sequence of inversions. + + + From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be + encoded using the array [22244]. + + The set of inversions to construct the permutation from. + A permutation generated from a sequence of inversions. 
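A small sketch of the Permutation type introduced above; the index array is an arbitrary example.

```csharp
using MathNet.Numerics;

// indices[i] is the location that element i is permuted to,
// as described in the constructor documentation above.
var p = new Permutation(new[] { 2, 0, 1 });

int target  = p[0];          // element 0 is sent to position 2
var inverse = p.Inverse();   // permutation that undoes p
int size    = p.Dimension;   // number of elements the permutation is over
```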
+ + + + Construct a sequence of inversions from the permutation. + + + From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be + encoded using the array [22244]. + + A sequence of inversions. + + + + Checks whether the array represents a proper permutation. + + An array which represents where each integer is permuted too: indices[i] represents that integer i + is permuted to location indices[i]. + True if represents a proper permutation, false otherwise. + + + + A single-variable polynomial with real-valued coefficients and non-negative exponents. + + + + + The coefficients of the polynomial in a + + + + + Only needed for the ToString method + + + + + Degree of the polynomial, i.e. the largest monomial exponent. For example, the degree of y=x^2+x^5 is 5, for y=3 it is 0. + The null-polynomial returns degree -1 because the correct degree, negative infinity, cannot be represented by integers. + + + + + Create a zero-polynomial with a coefficient array of the given length. + An array of length N can support polynomials of a degree of at most N-1. + + Length of the coefficient array + + + + Create a zero-polynomial + + + + + Create a constant polynomial. + Example: 3.0 -> "p : x -> 3.0" + + The coefficient of the "x^0" monomial. + + + + Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). + Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". + + Polynomial coefficients as array + + + + Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). + Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". + + Polynomial coefficients as enumerable + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k + + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + + The location where to evaluate the polynomial at. + + + + Evaluate a polynomial at point x. + + The location where to evaluate the polynomial at. + + + + Evaluate a polynomial at points z. + + The locations where to evaluate the polynomial at. + + + + Evaluate a polynomial at points z. + + The locations where to evaluate the polynomial at. + + + + Calculates the complex roots of the Polynomial by eigenvalue decomposition + + a vector of complex numbers with the roots + + + + Get the eigenvalue matrix A of this polynomial such that eig(A) = roots of this polynomial. + + Eigenvalue matrix A + This matrix is similar to the companion matrix of this polynomial, in such a way, that it's transpose is the columnflip of the companion matrix + + + + Addition of two Polynomials (point-wise). 
+ + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Addition of a polynomial and a scalar. + + + + + Subtraction of two Polynomials (point-wise). + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Addition of a scalar from a polynomial. + + + + + Addition of a polynomial from a scalar. + + + + + Negation of a polynomial. + + + + + Multiplies a polynomial by a polynomial (convolution) + + Left polynomial + Right polynomial + Resulting Polynomial + + + + Scales a polynomial by a scalar + + Polynomial + Scalar value + Resulting Polynomial + + + + Scales a polynomial by division by a scalar + + Polynomial + Scalar value + Resulting Polynomial + + + + Euclidean long division of two polynomials, returning the quotient q and remainder r of the two polynomials a and b such that a = q*b + r + + Left polynomial + Right polynomial + A tuple holding quotient in first and remainder in second + + + + Point-wise division of two Polynomials + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Point-wise multiplication of two Polynomials + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Division of two polynomials returning the quotient-with-remainder of the two polynomials given + + Right polynomial + A tuple holding quotient in first and remainder in second + + + + Addition of two Polynomials (piecewise) + + Left polynomial + Right polynomial + Resulting Polynomial + + + + adds a scalar to a polynomial. + + Polynomial + Scalar value + Resulting Polynomial + + + + adds a scalar to a polynomial. + + Scalar value + Polynomial + Resulting Polynomial + + + + Subtraction of two polynomial. + + Left polynomial + Right polynomial + Resulting Polynomial + + + + Subtracts a scalar from a polynomial. + + Polynomial + Scalar value + Resulting Polynomial + + + + Subtracts a polynomial from a scalar. + + Scalar value + Polynomial + Resulting Polynomial + + + + Negates a polynomial. + + Polynomial + Resulting Polynomial + + + + Multiplies a polynomial by a polynomial (convolution). + + Left polynomial + Right polynomial + resulting Polynomial + + + + Multiplies a polynomial by a scalar. + + Polynomial + Scalar value + Resulting Polynomial + + + + Multiplies a polynomial by a scalar. + + Scalar value + Polynomial + Resulting Polynomial + + + + Divides a polynomial by scalar value. + + Polynomial + Scalar value + Resulting Polynomial + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Utilities for working with floating point numbers. 
+ + + + Useful links: + + + http://docs.sun.com/source/806-3568/ncg_goldberg.html#689 - What every computer scientist should know about floating-point arithmetic + + + http://en.wikipedia.org/wiki/Machine_epsilon - Gives the definition of machine epsilon + + + + + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The number of decimal places on which the values must be compared. Must be 1 or larger. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The relative accuracy required for being almost equal. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The number of decimal places on which the values must be compared. Must be 1 or larger. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The maximum error in terms of Units in Last Place (ulps), i.e. the maximum number of decimals that may be different. Must be 1 or larger. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. 
+ + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of thg. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). 
We divide by + two so that we have half the range on each side of thg. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is smaller than the second value; otherwise false. + + + + Checks if a given double values is finite, i.e. neither NaN nor inifnity + + The value to be checked fo finitenes. + + + + The number of binary digits used to represent the binary number for a double precision floating + point value. i.e. there are this many digits used to represent the + actual number, where in a number as: 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5. 
+ + + + + The number of binary digits used to represent the binary number for a single precision floating + point value. i.e. there are this many digits used to represent the + actual number, where in a number as: 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 double-precision floating numbers (64 bit). + According to the definition of Prof. Demmel and used in LAPACK and Scilab. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 double-precision floating numbers (64 bit). + According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). + According to the definition of Prof. Demmel and used in LAPACK and Scilab. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). + According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. + + + + + Actual double precision machine epsilon, the smallest number that can be subtracted from 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Demmel. + On a standard machine this is equivalent to `DoublePrecision`. + + + + + Actual double precision machine epsilon, the smallest number that can be added to 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Higham. + On a standard machine this is equivalent to `PositiveDoublePrecision`. + + + + + The number of significant decimal places of double-precision floating numbers (64 bit). + + + + + The number of significant decimal places of single-precision floating numbers (32 bit). + + + + + Value representing 10 * 2^(-53) = 1.11022302462516E-15 + + + + + Value representing 10 * 2^(-24) = 5.96046447753906E-07 + + + + + Returns the magnitude of the number. + + The value. + The magnitude of the number. + + + + Returns the magnitude of the number. + + The value. + The magnitude of the number. + + + + Returns the number divided by it's magnitude, effectively returning a number between -10 and 10. + + The value. + The value of the number. + + + + Returns a 'directional' long value. This is a long value which acts the same as a double, + e.g. a negative double value will return a negative double value starting at 0 and going + more negative as the double value gets more negative. + + The input double value. + A long value which is roughly the equivalent of the double value. + + + + Returns a 'directional' int value. This is a int value which acts the same as a float, + e.g. a negative float value will return a negative int value starting at 0 and going + more negative as the float value gets more negative. + + The input float value. + An int value which is roughly the equivalent of the double value. + + + + Increments a floating point number to the next bigger number representable by the data type. + + The value which needs to be incremented. + How many times the number should be incremented. + + The incrementation step length depends on the provided value. + Increment(double.MaxValue) will return positive infinity. + + The next larger floating point value. + + + + Decrements a floating point number to the next smaller number representable by the data type. + + The value which should be decremented. + How many times the number should be decremented. 
+ + The decrementation step length depends on the provided value. + Decrement(double.MinValue) will return negative infinity. + + The next smaller floating point value. + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The maximum count of numbers between the zero and the number . + + Zero if || is fewer than numbers from zero, otherwise. + + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The maximum count of numbers between the zero and the number . + + Zero if || is fewer than numbers from zero, otherwise. + + + Thrown if is smaller than zero. + + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The absolute threshold for to consider it as zero. + Zero if || is smaller than , otherwise. + + Thrown if is smaller than zero. + + + + + Forces small numbers near zero to zero. + + The real number to coerce to zero, if it is almost zero. + Zero if || is smaller than 2^(-53) = 1.11e-16, otherwise. + + + + Determines the range of floating point numbers that will match the specified value with the given tolerance. + + The value. + The ulps difference. + + Thrown if is smaller than zero. + + Tuple of the bottom and top range ends. + + + + Returns the floating point number that will match the value with the tolerance on the maximum size (i.e. the result is + always bigger than the value) + + The value. + The ulps difference. + The maximum floating point number which is larger than the given . + + + + Returns the floating point number that will match the value with the tolerance on the minimum size (i.e. the result is + always smaller than the value) + + The value. + The ulps difference. + The minimum floating point number which is smaller than the given . + + + + Determines the range of ulps that will match the specified value with the given tolerance. + + The value. + The relative difference. + + Thrown if is smaller than zero. + + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + Tuple with the number of ULPS between the value and the value - relativeDifference as first, + and the number of ULPS between the value and the value + relativeDifference as second value. + + + + + Evaluates the count of numbers between two double numbers + + The first parameter. + The second parameter. + The second number is included in the number, thus two equal numbers evaluate to zero and two neighbor numbers evaluate to one. Therefore, what is returned is actually the count of numbers between plus 1. + The number of floating point values between and . + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + + Relative Epsilon (positive double or NaN). + + Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. 
+ + Relative Epsilon (positive float or NaN). + + Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive double or NaN) + Evaluates the positive epsilon. See also + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive float or NaN) + Evaluates the positive epsilon. See also + + + + + Calculates the actual (negative) double precision machine epsilon - the smallest number that can be subtracted from 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Demmel. + + Positive Machine epsilon + + + + Calculates the actual positive double precision machine epsilon - the smallest number that can be added to 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Higham. + + Machine epsilon + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. 
+ The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. 
+ + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. + Thrown if is smaller than zero. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. 
+ + + + Determines the 'number' of floating point numbers between two values (i.e. the number of discrete steps + between the two numbers) and then checks if that is within the specified tolerance. So if a tolerance + of 1 is passed then the result will be true only if the two numbers have the same binary representation + OR if they are two adjacent numbers that only differ by one step. + + + The comparison method used is explained in http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm . The article + at http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to transform the C code to + .NET enabled code without using pointers and unsafe code. + + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two floats and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. 
+ + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two vectors and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Support Interface for Precision Operations (like AlmostEquals). + + Type of the implementing class. + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + A norm of this value. + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. 
+ + The value to compare with. + A norm of the difference between this and the other value. + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. + + + + Revision + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + Frees the memory allocated to the MKL memory pool. + + + + + Frees the memory allocated to the MKL memory pool on the current thread. + + + + + Disable the MKL memory pool. May impact performance. + + + + + Retrieves information about the MKL memory pool. + + On output, returns the number of memory buffers allocated. + Returns the number of bytes allocated to all memory buffers. + + + + Enable gathering of peak memory statistics of the MKL memory pool. + + + + + Disable gathering of peak memory statistics of the MKL memory pool. + + + + + Measures peak memory usage of the MKL memory pool. + + Whether the usage counter should be reset. + The peak number of bytes allocated to all memory buffers. + + + + Disable gathering memory usage + + + + + Enable gathering memory usage + + + + + Return peak memory usage + + + + + Return peak memory usage and reset counter + + + + + Consistency vs. performance trade-off between runs on different machines. + + + + Consistent on the same CPU only (maximum performance) + + + Consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility) + + + Consistent on Intel CPUs supporting SSE2 or later + + + Consistent on Intel CPUs supporting SSE4.2 or later + + + Consistent on Intel CPUs supporting AVX or later + + + Consistent on Intel CPUs supporting AVX2 or later + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. + + + + + Helper class to load native libraries depending on the architecture of the OS and process. + + + + + Dictionary of handles to previously loaded libraries, + + + + + Gets a string indicating the architecture and bitness of the current process. + + + + + If the last native library failed to load then gets the corresponding exception + which occurred or null if the library was successfully loaded. + + + + + Load the native library with the given filename. + + The file name of the library to load. + Hint path where to look for the native binaries. Can be null. + True if the library was successfully loaded or if it has already been loaded. + + + + Try to load a native library by providing its name and a directory. + Tries to load an implementation suitable for the current CPU architecture + and process mode if there is a matching subfolder. + + True if the library was successfully loaded or if it has already been loaded. + + + + Try to load a native library by providing the full path including the file name of the library. + + True if the library was successfully loaded or if it has already been loaded. + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. 
+ + + + + Gets or sets the Fourier transform provider. Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsFFTProviderPath` or the default probing paths. + + + + + Try to use a native provider, if available. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsFFTProvider" environment variable, + or fall back to the best provider. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 + will cause k*k in the Bluestein sequence to overflow (GH-286). + + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Half rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Fully rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Radix-2 Reorder Helper Method + + Sample type + Sample vector + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). 
+ + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . 
+ On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. 
The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If calling this method fails, consider to fall back to alternatives like the managed provider. + + + + + Frees memory buffers, caches and handles allocated in or to the provider. 
+ Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. 
+ + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . 
+ On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + How to transpose a matrix. + + + + + Don't transpose a matrix. + + + + + Transpose a matrix. + + + + + Conjugate transpose a complex matrix. + + If a conjugate transpose is used with a real matrix, then the matrix is just transposed. + + + + Types of matrix norms. + + + + + The 1-norm. + + + + + The Frobenius norm. + + + + + The infinity norm. + + + + + The largest absolute value norm. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + Supported data types are Double, Single, Complex, and Complex32. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. 
+ + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiply elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. 
+ The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the full QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by QR factor. This is only used for the managed provider and can be + null for the native provider. The native provider uses the Q portion stored in the R matrix. + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + On entry the B matrix; on exit the X matrix. 
+ The number of columns of B. + On exit, the solution matrix. + Rows must be greater or equal to columns. + The type of QR factorization to perform. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Gets or sets the linear algebra provider. + Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsLAProviderPath` or the default probing paths. + + + + + Try to use a native provider, if available. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsLAProvider" environment variable, + or fall back to the best provider. + + + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. 
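For the vector kernels documented above (the AXPY-style update result = y + alpha*x, scaling, and the point wise array operations), plain NumPy expressions convey the same semantics. This is only an illustrative sketch of the operations, not the provider interface itself:

```python
# Illustrative NumPy sketch (assumed equivalents, not the provider's API) of the
# vector kernels described above: AXPY-style update, scaling, and the point wise
# add / subtract / multiply / divide / power operations.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])
alpha = 0.5

axpy   = y + alpha * x   # result = y + alpha*x (AXPY)
scaled = alpha * x       # SCAL: scale every element
zadd   = x + y           # point wise add
zsub   = x - y           # point wise subtract
zmul   = x * y           # point wise multiply
zdiv   = x / y           # point wise divide
zpow   = x ** y          # point wise power

print(axpy)              # [10.5 21.  31.5]
```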
+ + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). 
The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + The B matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. 
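The two QR entries above differ only in which factor overwrites A: the full factorization produces an M by M Q, while the thin variant (M > N) produces an N by N R. The sketch below (NumPy, illustrative only, not the documented routines) shows the distinction and the "rows must be greater or equal to columns" least-squares solve built on top of it:

```python
# Illustrative NumPy sketch (not the provider's API) of full vs. thin QR and of
# using the thin factorization to solve A*X = B in the least-squares sense.
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((6, 3))   # m > n, as the QR solve requires
b = rng.standard_normal((6, 2))

q_full, r_full = np.linalg.qr(a, mode="complete")  # Q is m-by-m, R is m-by-n
q_thin, r_thin = np.linalg.qr(a, mode="reduced")   # Q is m-by-n, R is n-by-n

# Least-squares solve via the thin factorization: R*X = Q^T * B
x = np.linalg.solve(r_thin, q_thin.T @ b)

assert np.allclose(q_full @ r_full, a)
assert np.allclose(x, np.linalg.lstsq(a, b, rcond=None)[0])
```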
+ + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. 
+ This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + The requested of the matrix. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. 
+ The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. 
+ The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. 
If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. 
+ The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. 
+ + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. 
+ The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. 
+ + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. 
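The LU entries above follow the GETRF/GETRS pattern: compute P*A = L*U with pivot indices once, then back-substitute for each right-hand side of B. A short SciPy sketch of that pattern, offered only as an illustration of the semantics (`lu_factor`/`lu_solve` are SciPy names, not the provider's):

```python
# Illustrative SciPy sketch (not the provider's API) of the GETRF/GETRS pattern
# documented above: factor A once as P*A = L*U, then reuse the factorization to
# solve A*X = B for a multi-column right-hand side.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
a = rng.standard_normal((5, 5))
b = rng.standard_normal((5, 3))  # 3 columns of B

lu, piv = lu_factor(a)           # GETRF: packed LU factors plus pivot indices
x = lu_solve((lu, piv), b)       # GETRS: solve using the previously factored A

assert np.allclose(a @ x, b)
```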
+ + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. 
+ If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. 
+ The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. 
On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + The B matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. 
+ The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + Data array of matrix V (eigenvectors) + Previously tridiagonalized matrix by SymmetricTridiagonalize. + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of the eigenvectors + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. 
+ Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + The requested of the matrix. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. 
+ The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. 
On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. 
+ Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + Data array of matrix V (eigenvectors) + Previously tridiagonalized matrix by SymmetricTridiagonalize. + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of the eigenvectors + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. 
+ There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. 
+ + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . 
+ The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Symmetric Householder reduction to tridiagonal form. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. 
+ + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. 
result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. 
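The Cholesky entries above all describe the usual POTRF/POTRS split: factor the symmetric, positive definite matrix once, then reuse that factor for each right-hand side. As a minimal illustrative sketch (not the provider's own API; the matrices `a` and `b` below are made up), SciPy's `cho_factor`/`cho_solve` follow the same pattern:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Symmetric, positive definite A and a right-hand side B (illustrative values)
a = np.array([[4.0, 2.0],
              [2.0, 3.0]])
b = np.array([[1.0],
              [2.0]])

c, lower = cho_factor(a)        # factor once (POTRF-style)
x = cho_solve((c, lower), b)    # solve with the stored factor (POTRS-style)

print(np.allclose(a @ x, b))    # True: A*X = B
```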
+ + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. 
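The SVD-based solve described just above amounts to applying the pseudo-inverse built from U, S and VT to B. A hedged NumPy sketch of that idea with made-up data (this is not the provider's code path, which may treat rank deficiency and overwriting differently):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 3))     # M by N with M >= N
b = rng.standard_normal((6, 2))

u, s, vt = np.linalg.svd(a, full_matrices=False)   # thin SVD: U, singular values, V^T
x = vt.T @ ((u.T @ b) / s[:, None])                # X = V * S^-1 * U^T * B

print(np.allclose(x, np.linalg.lstsq(a, b, rcond=None)[0]))   # matches least squares
```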
+ + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Symmetric Householder reduction to tridiagonal form. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. 
+ + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. 
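The GEMM-style update c = alpha*op(a)*op(b) + beta*c that these provider entries keep referring to is easiest to pin down with a toy example. The sketch below, with illustrative shapes and no transposition, spells the update out by hand and cross-checks it against SciPy's BLAS wrapper:

```python
import numpy as np
from scipy.linalg.blas import dgemm

alpha, beta = 2.0, 0.5
a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
c = np.ones((2, 4))

expected = alpha * (a @ b) + beta * c         # the update written out directly
updated = dgemm(alpha, a, b, beta=beta, c=c)  # the same update via the GEMM wrapper

print(np.allclose(updated, expected))         # True
```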
+ + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. 
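For the QR entries above (full and thin factorization plus the QR-based solves), the standard recipe when M > N is a thin QR followed by back-substitution on R. A hedged NumPy/SciPy sketch with made-up data (the provider additionally carries the tau work vector, which has no counterpart here):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
a = rng.standard_normal((8, 3))            # M > N, full column rank
b = rng.standard_normal((8, 2))

q, r = np.linalg.qr(a, mode='reduced')     # thin QR: Q is M x N, R is N x N
x = solve_triangular(r, q.T @ b)           # R*X = Q^T*B via back-substitution

print(np.allclose(x, np.linalg.lstsq(a, b, rcond=None)[0]))   # least-squares solution
```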
+ + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . 
+ This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. 
+ + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + Hint path where to look for the native binaries + + Sets the desired bit consistency on repeated identical computations on varying CPU architectures, + as a trade-off with performance. + + VML optimal precision and rounding. + VML accuracy mode. + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If calling this method fails, consider to fall back to alternatives like the managed provider. + + + + + Frees memory buffers, caches and handles allocated in or to the provider. 
+ Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. 
+ The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. 
+ There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. 
The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. 
+ + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Error codes return from the MKL provider. + + + + + Unable to allocate memory. + + + + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. 
+ + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. 
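A minimal sketch of selecting the native OpenBLAS provider and falling back to the managed provider when it is not available. The helper Control.TryUseNativeOpenBLAS(), and the hint path value, are assumptions about the referenced MathNet.Numerics version; adjust them to whatever the installed package actually exposes.

```csharp
// Minimal sketch: prefer the native OpenBLAS provider, fall back to managed.
// Control.TryUseNativeOpenBLAS() is assumed to exist in the referenced version;
// the NativeProviderPath value is a placeholder, not a required location.
using System;
using MathNet.Numerics;

class ProviderSelectionSketch
{
    static void Main()
    {
        // Hint path where the native binaries are searched (optional).
        Control.NativeProviderPath = @"runtimes\win-x64\native";

        if (!Control.TryUseNativeOpenBLAS())
        {
            // Native provider not available: stay on the managed provider.
            Control.UseManaged();
        }

        Console.WriteLine(Control.LinearAlgebraProvider);
    }
}
```

Verification may still fail at first use even when the probe succeeds, so keeping the managed fallback path in place is the safer design.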
+ + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . 
+ On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. 
+ + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. 
The length of the array must be order * order. + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . 
+ The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. 
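As with QR, the GESVD-equivalent routine is usually invoked through the Matrix API rather than called directly. A minimal sketch, again assuming MathNet.Numerics is referenced; the matrix values are illustrative only:

```csharp
// Minimal sketch: singular value decomposition via the high-level Matrix API,
// which ends up in the provider's GESVD-equivalent routine documented above.
using System;
using MathNet.Numerics.LinearAlgebra;

class SvdSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 2.0, 0.0 },
            { 0.0, 3.0 },
            { 0.0, 0.0 }
        });

        // Passing true also computes the U and VT vectors, not only the singular values.
        var svd = a.Svd(true);
        Console.WriteLine(svd.S);   // singular values
        Console.WriteLine(svd.U);   // left singular vectors
        Console.WriteLine(svd.VT);  // transposed right singular vectors
    }
}
```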
+ + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. 
+ This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. 
+ + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Error codes return from the native OpenBLAS provider. + + + + + Unable to allocate memory. + + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. + + Uses and uses the value of + to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + Uses the value of to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + Uses + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + if set to true , the class is thread safe. + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Multiplicative congruential generator using a modulus of 2^31-1 and a multiplier of 1132489760. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Multiplicative congruential generator using a modulus of 2^59 and a multiplier of 13^13. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. 
+ + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Random number generator using Mersenne Twister 19937 algorithm. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + Uses the value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + A 32-bit combined multiple recursive generator with 2 components of order 3. + + Based off of P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816--822. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. 
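All of the generators documented here expose the same basic sampling surface, so they can be swapped without changing calling code. A minimal sketch using the Mersenne Twister generator; the seed 42 and the sample count are illustrative only:

```csharp
// Minimal sketch: drawing uniform samples from the Mersenne Twister generator
// documented above. The seed and sample count are illustrative only.
using System;
using MathNet.Numerics.Random;

class RandomSketch
{
    static void Main()
    {
        // Constructor (seed, threadSafe): pass true when the instance is shared
        // between threads, false is fine for single-threaded use.
        var rng = new MersenneTwister(42, false);

        Console.WriteLine(rng.NextDouble());          // one sample in [0.0, 1.0)

        double[] samples = rng.NextDoubles(5);        // array of samples in [0.0, 1.0)
        Console.WriteLine(string.Join(", ", samples));
    }
}
```

For reproducible results, construct the generator with an explicit seed as above; the seedless constructors documented here derive their seed from time and unique GUIDs.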
+ + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Represents a Parallel Additive Lagged Fibonacci pseudo-random number generator. + + + The type bases upon the implementation in the + Boost Random Number Library. + It uses the modulus 232 and by default the "lags" 418 and 1279. Some popular pairs are presented on + Wikipedia - Lagged Fibonacci generator. + + + + + Default value for the ShortLag + + + + + Default value for the LongLag + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The ShortLag value + TheLongLag value + + + + Gets the short lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Gets the long lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Stores an array of random numbers + + + + + Stores an index for the random number array element that will be accessed next. + + + + + Fills the array with new unsigned random numbers. + + + Generated random numbers are 32-bit unsigned integers greater than or equal to 0 + and less than or equal to . + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + This class implements extension methods for the System.Random class. 
The extension methods generate + pseudo-random distributed numbers for types other than double and int32. + + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random bytes. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers greater than or equal to zero and less than . + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers within the specified range. + + The random number generator. + The array to fill with random values. + Lower bound, inclusive. + Upper bound, exclusive. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative random number less than . + + The random number generator. + + A 64-bit signed integer greater than or equal to 0, and less than ; that is, + the range of return values includes 0 but not . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int32 range. + + The random number generator. + + A 32-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int64 range. + + The random number generator. + + A 64-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative decimal floating point random number less than 1.0. + + The random number generator. 
+ + A decimal floating point number greater than or equal to 0.0, and less than 1.0; that is, + the range of return values includes 0.0 but not 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random boolean. + + The random number generator. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Provides a time-dependent seed value, matching the default behavior of System.Random. + WARNING: There is no randomness in this seed and quick repeated calls can cause + the same seed value. Do not use for cryptography! + + + + + Provides a seed based on time and unique GUIDs. + WARNING: There is only low randomness in this seed, but at least quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Provides a seed based on an internal random number generator (crypto if available), time and unique GUIDs. + WARNING: There is only medium randomness in this seed, but quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Base class for random number generators. This class introduces a layer between + and the Math.Net Numerics random number generators to provide thread safety. + When used directly it use the System.Random as random number source. + + + + + Initializes a new instance of the class using + the value of to set whether + the instance is thread safe or not. + + + + + Initializes a new instance of the class. + + if set to true , the class is thread safe. + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The array to fill with random values. + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The size of the array to fill. + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than . + + + + + Returns a random number less then a specified maximum. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + A 32-bit signed integer less than . + is zero or negative. + + + + Returns a random number within a specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + A 32-bit signed integer greater than or equal to and less than ; that is, the range of return values includes but not . If equals , is returned. + + is greater than . + + + + Fills an array with random 32-bit signed integers greater than or equal to zero and less than . + + The array to fill with random values. + + + + Returns an array with random 32-bit signed integers greater than or equal to zero and less than . + + The size of the array to fill. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ 1. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . + + + + + Returns an infinite sequence of random numbers within a specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Fills the elements of a specified array of bytes with random numbers. + + An array of bytes to contain random numbers. + is null. + + + + Returns a random number between 0.0 and 1.0. + + A double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. + + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 1982 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: + An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 2006 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". + Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. 
+ + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Implements a multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George. (2003). Xorshift RNGs. + Xn = a * Xn−3 + c mod 2^32 + http://www.jstatsoft.org/v08/i14/paper + + + + + The default value for X1. + + + + + The default value for X2. + + + + + The default value for the multiplier. + + + + + The default value for the carry over. + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Seed or last but three unsigned random number. + + + + + Last but two unsigned random number. + + + + + Last but one unsigned random number. + + + + + The value of the carry over. + + + + + The multiplier. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Note: must be less than . + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. 
+ + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Xoshiro256** pseudo random number generator. + A random number generator based on the class in the .NET library. + + + This is xoshiro256** 1.0, our all-purpose, rock-solid generator. It has + excellent(sub-ns) speed, a state space(256 bits) that is large enough + for any parallel application, and it passes all tests we are aware of. + + For generating just floating-point numbers, xoshiro256+ is even faster. + + The state must be seeded so that it is not everywhere zero.If you have + a 64-bit seed, we suggest to seed a splitmix64 generator and use its + output to fill s. + + For further details see: + David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators". + https://arxiv.org/abs/1805.01407 + + + + + Construct a new random number generator with a random seed. + + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. 
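A brief, hedged sketch of the two generators just described. The Xorshift defaults (a = 916905990, c = 13579, X1 = 77465321, X2 = 362436069) are those listed above; the class name Xoshiro256StarStar and its (seed, threadSafe) constructor are assumptions based on the xoshiro256** description.

using System;
using MathNet.Numerics.Random;   // assumed namespace

class XorshiftDemo
{
    static void Main()
    {
        // Marsaglia multiply-with-carry Xorshift, seeded, using the default
        // parameters listed above.
        var xs = new Xorshift(2021);
        Console.WriteLine(xs.NextDouble());

        // xoshiro256** generator (class name assumed); seeded, not thread safe.
        var xo = new Xoshiro256StarStar(2021, false);
        Console.WriteLine(xo.Next());
    }
}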
+ + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Splitmix64 RNG. + + RNG state. This can take any value, including zero. + A new random UInt64. + + Splitmix64 produces equidistributed outputs, thus if a zero is generated then the + next zero will be after a further 2^64 outputs. + + + + + Bisection root-finding algorithm. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy for both the root and the function value at the root. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Algorithm by Brent, Van Wijngaarden, Dekker et al. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. 
+ The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Helper method useful for preventing rounding errors. + a*sign(b) + + + + Algorithm by Broyden. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Relative step size for calculating the Jacobian matrix at first step. Default 1.0e-4 + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + Relative step size for calculating the Jacobian matrix at first step. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Helper method to calculate an approximation of the Jacobian. + + The function. + The argument (initial guess). + The result (of initial guess). + Relative step size for calculating the Jacobian. + + + + Finds roots to the cubic equation x^3 + a2*x^2 + a1*x + a0 = 0 + Implements the cubic formula in http://mathworld.wolfram.com/CubicFormula.html + + + + + Q and R are transformed variables. + + + + + n^(1/3) - work around a negative double raised to (1/3) + + + + + Find all real-valued roots of the cubic equation a0 + a1*x + a2*x^2 + x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Pure Newton-Raphson root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. 
Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Robust Newton-Raphson root-finding algorithm that falls back to bisection when overshooting or converging too slow, or to subdivision on lacking bracketing. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Default 20. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Example: 20. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Pure Secant root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. 
+ Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false + + + Detect a range containing at least one root. + The function to detect roots from. + Lower value of the range. + Upper value of the range + The growing factor of research. Usually 1.6. + Maximum number of iterations. Usually 50. + True if the bracketing operation succeeded, false otherwise. + This iterative methods stops when two values with opposite signs are found. + + + + Sorting algorithms for single, tuple and triple lists. + + + + + Sort a list of keys, in place using the quick sort algorithm using the quick sort algorithm. + + The type of elements in the key list. + List to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a range of a list of keys, in place using the quick sort algorithm. + + The type of element in the list. + List to sort. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. 
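The root-finding entries above (Bisection, Brent, Broyden, Newton-Raphson, Secant) all follow the same FindRoot/TryFindRoot pattern. A minimal C# sketch under the assumption that these solvers live in MathNet.Numerics.RootFinding and take the parameters listed above (bounds, accuracy, maximum iterations, and for the robust Newton-Raphson a subdivision count).

using System;
using MathNet.Numerics.RootFinding;   // assumed namespace of the solvers documented above

class RootDemo
{
    static void Main()
    {
        // f(x) = x^3 - 2x - 5 has a single real root near x = 2.0945515.
        Func<double, double> f  = x => x * x * x - 2 * x - 5;
        Func<double, double> df = x => 3 * x * x - 2;

        double r1 = Bisection.FindRoot(f, 2, 3, 1e-10, 200);
        double r2 = Brent.FindRoot(f, 2, 3, 1e-12, 100);
        double r3 = RobustNewtonRaphson.FindRoot(f, df, 2, 3, 1e-12, 100, 20);

        Console.WriteLine($"{r1} {r2} {r3}");   // all three should agree to ~10 digits
    }
}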
+ + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the primary list. + The type of elements in the secondary list. + List to sort. + List to sort on duplicate primary items, and permute the same way as the key list. + Comparison, defining the primary sort order. + Comparison, defining the secondary sort order. + + + + Recursive implementation for an in place quick sort on a list. + + The type of the list on which the quick sort is performed. + The list which is sorted using quick sort. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on a list while reordering one other list accordingly. + + The type of the list on which the quick sort is performed. + The type of the list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on one list while reordering two other lists accordingly. + + The type of the list on which the quick sort is performed. + The type of the first list which is automatically reordered accordingly. + The type of the second list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The first list which is automatically reordered accordingly. + The second list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on the primary and then by the secondary list while reordering one secondary list accordingly. + + The type of the primary list. + The type of the secondary list. + The list which is sorted using quick sort. + The list which is sorted secondarily (on primary duplicates) and automatically reordered accordingly. + The method with which to compare two elements of the primary list. + The method with which to compare two elements of the secondary list. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Performs an in place swap of two elements in a list. + + The type of elements stored in the list. + The list in which the elements are stored. + The index of the first element of the swap. + The index of the second element of the swap. + + + + This partial implementation of the SpecialFunctions class contains all methods related to the Airy functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Bessel functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the error function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Hankel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the harmonic function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the logistic function. 
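As a usage illustration of the key/item quick sort helpers described above: a short sketch that sorts a key list in place while permuting a second list the same way, keeping the two aligned. The static class name Sorting and the MathNet.Numerics namespace are assumptions inferred from these descriptions.

using System;
using MathNet.Numerics;   // assumed namespace of the Sorting helper described above

class SortDemo
{
    static void Main()
    {
        double[] keys  = { 3.2, 1.5, 2.8 };
        string[] items = { "c", "a", "b" };

        // Sort keys ascending and reorder items identically.
        Sorting.Sort(keys, items);

        Console.WriteLine(string.Join(" ", keys));    // 1.5 2.8 3.2
        Console.WriteLine(string.Join(" ", items));   // a b c
    }
}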
+ + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the spherical Bessel functions. + + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of Airy function Ai + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of the Airy function Ai. + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Ai. + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. + ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi(z). + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. + ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. 
+ The exponentially scaled derivative of the Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Bi. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. + + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * Y(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. + + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselY(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. 
+ + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Computes the logarithm of the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The logarithm of the Euler Beta function evaluated at z,w. + If or are not positive. + + + + Computes the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The Euler Beta function evaluated at z,w. + If or are not positive. + + + + Returns the lower incomplete (unregularized) beta function + B(a,b,x) = int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. + The lower incomplete (unregularized) beta function. + + + + Returns the regularized lower incomplete beta function + I_x(a,b) = 1/Beta(a,b) * int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. + The regularized lower incomplete beta function. + + + + ************************************** + COEFFICIENTS FOR METHOD ErfImp * + ************************************** + + Polynomial coefficients for a numerator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a denominator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. 
+ + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + + ************************************** + COEFFICIENTS FOR METHOD ErfInvImp * + ************************************** + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. 
+ + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Calculates the error function. + The value to evaluate. + the error function evaluated at given value. + + + returns 1 if x == double.PositiveInfinity. + returns -1 if x == double.NegativeInfinity. + + + + + Calculates the complementary error function. + The value to evaluate. + the complementary error function evaluated at given value. + + + returns 0 if x == double.PositiveInfinity. + returns 2 if x == double.NegativeInfinity. + + + + + Calculates the inverse error function evaluated at z. + The inverse error function evaluated at given value. + + + returns double.PositiveInfinity if z >= 1.0. + returns double.NegativeInfinity if z <= -1.0. + + + Calculates the inverse error function evaluated at z. + value to evaluate. + the inverse error function evaluated at Z. + + + + Implementation of the error function. + + Where to evaluate the error function. + Whether to compute 1 - the error function. + the error function. + + + Calculates the complementary inverse error function evaluated at z. + The complementary inverse error function evaluated at given value. + We have tested this implementation against the arbitrary precision mpmath library + and found cases where we can only guarantee 9 significant figures correct. + + returns double.PositiveInfinity if z <= 0.0. + returns double.NegativeInfinity if z >= 2.0. + + + calculates the complementary inverse error function evaluated at z. + value to evaluate. + the complementary inverse error function evaluated at Z. + + + + The implementation of the inverse error function. + + First intermediate parameter. + Second intermediate parameter. + Third intermediate parameter. + the inverse error function. + + + + Computes the generalized Exponential Integral function (En). + + The argument of the Exponential Integral function. + Integer power of the denominator term. Generalization index. + The value of the Exponential Integral function. + + This implementation of the computation of the Exponential Integral function follows the derivation in + "Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55", Abramowitz, M., and Stegun, I.A. 1964, reprinted 1968 by + Dover Publications, New York), Chapters 6, 7, and 26. 
+ AND + "Advanced mathematical methods for scientists and engineers", Bender, Carl M.; Steven A. Orszag (1978). page 253 + + + for x > 1 uses continued fraction approach that is often used to compute incomplete gamma. + for 0 < x <= 1 uses Taylor series expansion + + Our unit tests suggest that the accuracy of the Exponential Integral function is correct up to 13 floating point digits. + + + + + Computes the factorial function x -> x! of an integer number > 0. The function can represent all number up + to 22! exactly, all numbers up to 170! using a double representation. All larger values will overflow. + + A value value! for value > 0 + + If you need to multiply or divide various such factorials, consider using the logarithmic version + instead so you can add instead of multiply and subtract instead of divide, and + then exponentiate the result using . This will also circumvent the problem that + factorials become very large even for small parameters. + + + + + + Computes the factorial of an integer. + + + + + Computes the logarithmic factorial function x -> ln(x!) of an integer number > 0. + + A value value! for value > 0 + + + + Computes the binomial coefficient: n choose k. + + A nonnegative value n. + A nonnegative value h. + The binomial coefficient: n choose k. + + + + Computes the natural logarithm of the binomial coefficient: ln(n choose k). + + A nonnegative value n. + A nonnegative value h. + The logarithmic binomial coefficient: ln(n choose k). + + + + Computes the multinomial coefficient: n choose n1, n2, n3, ... + + A nonnegative value n. + An array of nonnegative values that sum to . + The multinomial coefficient. + if is . + If or any of the are negative. + If the sum of all is not equal to . + + + + The order of the approximation. + + + + + Auxiliary variable when evaluating the function. + + + + + Polynomial coefficients for the approximation. + + + + + Computes the logarithm of the Gamma function. + + The argument of the gamma function. + The logarithm of the gamma function. + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which achieves an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + Our unit tests suggest that the accuracy of the Gamma function is correct up to 14 floating point digits. + + + + + Computes the Gamma function. + + The argument of the gamma function. + The logarithm of the gamma function. + + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which should achieve an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + + Our unit tests suggest that the accuracy of the Gamma function is correct up to 13 floating point digits. + + + + + Returns the upper incomplete regularized gamma function + Q(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete regularized gamma function. 
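The error-function and gamma-function entries above are easier to follow with a short usage sketch. These XML comments match the Math.NET Numerics documentation shipped alongside oscardata; the snippet below assumes its usual `MathNet.Numerics.SpecialFunctions` members (`Erf`, `Erfc`, `ErfInv`, `Factorial`, `FactorialLn`, `Binomial`, `Gamma`, `GammaLn`) and simply exercises the identities the comments describe.

```csharp
// Minimal sketch, assuming the MathNet.Numerics package is referenced.
// The member names below are the usual Math.NET spellings; they are not
// spelled out verbatim in the XML comments above.
using System;
using MathNet.Numerics;

class SpecialFunctionsDemo
{
    static void Main()
    {
        double x = 1.0;

        // Erf and Erfc are complementary: Erf(x) + Erfc(x) == 1.
        double erf = SpecialFunctions.Erf(x);
        double erfc = SpecialFunctions.Erfc(x);
        Console.WriteLine($"Erf(1) + Erfc(1) = {erf + erfc}");            // ~1.0

        // ErfInv undoes Erf (documented limits: +/-Infinity at z = +/-1).
        Console.WriteLine($"ErfInv(Erf(1)) = {SpecialFunctions.ErfInv(erf)}"); // ~1.0

        // Factorials are exact up to 22!; the ln-variant avoids overflow.
        Console.WriteLine($"5! = {SpecialFunctions.Factorial(5)}");        // 120
        Console.WriteLine($"ln(170!) = {SpecialFunctions.FactorialLn(170)}");

        // Binomial coefficient: n choose k.
        Console.WriteLine($"C(10,3) = {SpecialFunctions.Binomial(10, 3)}"); // 120

        // Gamma(n) == (n-1)! for positive integers.
        Console.WriteLine($"Gamma(6) = {SpecialFunctions.Gamma(6.0)}");     // 120
        Console.WriteLine($"GammaLn(6) = {SpecialFunctions.GammaLn(6.0)}"); // ln(120)
    }
}
```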
+ + + + Returns the upper incomplete gamma function + Gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete gamma function. + + + + Returns the lower incomplete gamma function + gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the lower incomplete regularized gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the inverse P^(-1) of the regularized lower incomplete gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0, + such that P^(-1)(a,P(a,x)) == x. + + + + + Computes the Digamma function which is mathematically defined as the derivative of the logarithm of the gamma function. + This implementation is based on + Jose Bernardo + Algorithm AS 103: + Psi ( Digamma ) Function, + Applied Statistics, + Volume 25, Number 3, 1976, pages 315-317. + Using the modifications as in Tom Minka's lightspeed toolbox. + + The argument of the digamma function. + The value of the DiGamma function at . + + + + Computes the inverse Digamma function: this is the inverse of the logarithm of the gamma function. This function will + only return solutions that are positive. + This implementation is based on the bisection method. + + The argument of the inverse digamma function. + The positive solution to the inverse DiGamma function at . + + + + Computes the Rising Factorial (Pochhammer function) x -> (x)n, n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Rising Factorial for x and n + + + + Computes the Falling Factorial (Pochhammer function) x -> x(n), n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Falling Factorial for x and n + + + + A generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. + This is the most common pFq(a1, ..., ap; b1,...,bq; z) representation + see: https://en.wikipedia.org/wiki/Generalized_hypergeometric_function + + The list of coefficients in the numerator + The list of coefficients in the denominator + The variable in the power series + The value of the Generalized HyperGeometric Function. + + + + Returns the Hankel function of the first kind. + HankelH1(n, z) is defined as BesselJ(n, z) + j * BesselY(n, z). + + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the first kind. + + + + Returns the exponentially scaled Hankel function of the first kind. + ScaledHankelH1(n, z) is given by Exp(-z * j) * HankelH1(n, z) where j = Sqrt(-1). + + The order of the Hankel function. + The value to compute the Hankel function of. + The exponentially scaled Hankel function of the first kind. + + + + Returns the Hankel function of the second kind. + HankelH2(n, z) is defined as BesselJ(n, z) - j * BesselY(n, z). + + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the second kind. + + + + Returns the exponentially scaled Hankel function of the second kind. + ScaledHankelH2(n, z) is given by Exp(z * j) * HankelH2(n, z) where j = Sqrt(-1). + + The order of the Hankel function. 
+ The value to compute the Hankel function of. + The exponentially scaled Hankel function of the second kind. + + + + Computes the 'th Harmonic number. + + The Harmonic number which needs to be computed. + The t'th Harmonic number. + + + + Compute the generalized harmonic number of order n of m. (1 + 1/2^m + 1/3^m + ... + 1/n^m) + + The order parameter. + The power parameter. + General Harmonic number. + + + + Returns the Kelvin function of the first kind. + KelvinBe(nu, x) is given by BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(nu, x) and KelvinBei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function of the first kind. + + + + Returns the Kelvin function ber. + KelvinBer(nu, x) is given by the real part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function ber. + KelvinBer(x) is given by the real part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(x) is equivalent to KelvinBer(0, x). + + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function bei. + KelvinBei(nu, x) is given by the imaginary part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the Kelvin function bei. + KelvinBei(x) is given by the imaginary part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBei(x) is equivalent to KelvinBei(0, x). + + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the derivative of the Kelvin function ber. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function ber + + + + Returns the derivative of the Kelvin function ber. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ber. + + + + Returns the derivative of the Kelvin function bei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function bei. + + + + Returns the derivative of the Kelvin function bei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function bei. + + + + Returns the Kelvin function of the second kind + KelvinKe(nu, x) is given by Exp(-nu * pi * j / 2) * BesselK(nu, x * sqrt(j)) where j = sqrt(-1). + KelvinKer(nu, x) and KelvinKei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + The order of the Kelvin function. + The value to calculate the kelvin function of, + + + + + Returns the Kelvin function ker. + KelvinKer(nu, x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function ker. + KelvinKer(x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKer(x) is equivalent to KelvinKer(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function kei. 
+ KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the Kelvin function kei. + KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKei(x) is equivalent to KelvinKei(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the derivative of the Kelvin function ker. + + The order of the Kelvin function. + The non-negative real value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function ker. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function kei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Returns the derivative of the Kelvin function kei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic + + The parameter for which to compute the logistic function. + The logistic function of . + + + + Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit + + The parameter for which to compute the logit function. This number should be + between 0 and 1. + The logarithm of divided by 1.0 - . + + + + ************************************** + COEFFICIENTS FOR METHODS bessi0 * + ************************************** + + Chebyshev coefficients for exp(-x) I0(x) + in the interval [0, 8]. + + lim(x->0){ exp(-x) I0(x) } = 1. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I0(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessi1 * + ************************************** + + Chebyshev coefficients for exp(-x) I1(x) / x + in the interval [0, 8]. + + lim(x->0){ exp(-x) I1(x) / x } = 1/2. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I1(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessk0, bessk0e * + ************************************** + + Chebyshev coefficients for K0(x) + log(x/2) I0(x) + in the interval [0, 2]. The odd order coefficients are all + zero; only the even order coefficients are listed. + + lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. + + + + Chebyshev coefficients for exp(x) sqrt(x) K0(x) + in the inverted interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessk1, bessk1e * + ************************************** + + Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) + in the interval [0, 2]. + + lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. + + + + Chebyshev coefficients for exp(x) sqrt(x) K1(x) + in the interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). + + + + Returns the modified Bessel function of first kind, order 0 of the argument. +

+ The function is defined as i0(x) = j0( ix ).
+
+ The range is partitioned into the two intervals [0, 8] and (8, infinity).
+ Chebyshev polynomial expansions are employed in each interval.
+
+ The value to compute the Bessel function of.
+
+ Returns the modified Bessel function of first kind, order 1 of the argument.
+
+ The function is defined as i1(x) = -i j1( ix ).
+
+ The range is partitioned into the two intervals [0, 8] and (8, infinity).
+ Chebyshev polynomial expansions are employed in each interval.
+
+ The value to compute the Bessel function of.
+
+ Returns the modified Bessel function of the second kind of order 0 of the argument.
+
+ The range is partitioned into the two intervals [0, 2] and (2, infinity).
+ Chebyshev polynomial expansions are employed in each interval.
+
+ The value to compute the Bessel function of.
+
+ Returns the exponentially scaled modified Bessel function of the second kind of order 0 of the argument.
+
+ The value to compute the Bessel function of.
+
+ Returns the modified Bessel function of the second kind of order 1 of the argument.
+
+ The range is partitioned into the two intervals [0, 2] and (2, infinity).
+ Chebyshev polynomial expansions are employed in each interval.
+
+ The value to compute the Bessel function of.
+
+ Returns the exponentially scaled modified Bessel function of the second kind of order 1 of the argument.
+
+ k1e(x) = exp(x) * k1(x).
+
+ The value to compute the Bessel function of.
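As a quick illustration of the modified Bessel helpers documented above, including the scaled variants and the stated identity k1e(x) = exp(x) * k1(x), here is a hedged C# sketch. The member names `BesselI0`, `BesselI1`, `BesselK0`, `BesselK1` and `BesselK1e` are the usual Math.NET spellings and are an assumption; older library versions may not expose them.

```csharp
// Sketch only: assumes a Math.NET Numerics version that exposes the scalar
// modified Bessel helpers described above. Exact member names may differ
// between library versions.
using System;
using MathNet.Numerics;

class BesselDemo
{
    static void Main()
    {
        double x = 2.5;

        double i0 = SpecialFunctions.BesselI0(x);
        double i1 = SpecialFunctions.BesselI1(x);
        double k0 = SpecialFunctions.BesselK0(x);
        double k1 = SpecialFunctions.BesselK1(x);

        Console.WriteLine($"I0({x}) = {i0},  I1({x}) = {i1}");
        Console.WriteLine($"K0({x}) = {k0},  K1({x}) = {k1}");

        // The docs state k1e(x) = exp(x) * k1(x); the scaled form avoids
        // underflow of K1 for large arguments.
        double k1e = SpecialFunctions.BesselK1e(x);
        Console.WriteLine($"K1e({x}) = {k1e}, exp(x)*K1(x) = {Math.Exp(x) * k1}");
    }
}
```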
+ + + Returns the modified Struve function of order 0. + + The value to compute the function of. + + + + Returns the modified Struve function of order 1. + + The value to compute the function of. + + + + Returns the difference between the Bessel I0 and Struve L0 functions. + + The value to compute the function of. + + + + Returns the difference between the Bessel I1 and Struve L1 functions. + + The value to compute the function of. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Numerically stable exponential minus one, i.e. x -> exp(x)-1 + + A number specifying a power. + Returns exp(power)-1. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Evaluation functions, useful for function approximation. + + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. 
+ The coefficients of the polynomial, coefficient for power k at index k. + + + + Numerically stable series summation + + provides the summands sequentially + Sum + + + Evaluates the series of Chebyshev polynomials Ti at argument x/2. + The series is given by +
+            y = sum(i = 0..N-1) coef[i] * T_i(x/2)
+
+ Coefficients are stored in reverse order, i.e. the zero order term is
+ last in the array. Note N is the number of coefficients, not the order.
+
+ If coefficients are for the interval a to b, x must have been transformed
+ to x -> 2(2x - b - a)/(b-a) before entering the routine. This maps x from
+ (a, b) to (-1, 1), over which the Chebyshev polynomials are defined.
+
+ If the coefficients are for the inverted interval, in which (a, b) is
+ mapped to (1/b, 1/a), the transformation required is
+ x -> 2(2ab/x - b - a)/(b-a). If b is infinity, this becomes x -> 4a/x - 1.
+
+ SPEED:
+
+ Taking advantage of the recurrence properties of the Chebyshev
+ polynomials, the routine requires one more addition per loop than
+ evaluating a nested polynomial of the same degree.
+
+ The coefficients of the polynomial.
+ Argument to the polynomial.
+
+ Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs
+ Marked as Deprecated in
+ http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html
+
+ Summation of Chebyshev polynomials, using the Clenshaw method with Reinsch modification.
+ The no. of terms in the sequence.
+ The coefficients of the Chebyshev series, length n+1.
+ The value at which the series is to be evaluated.
+ ORIGINAL AUTHOR: Dr. Allan J. MacLeod; Dept. of Mathematics and Statistics, University of Paisley; High St., PAISLEY, SCOTLAND
+ REFERENCES: "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series",
+ J. Oliver, J.I.M.A., vol. 20, 1977, pp. 379-391
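The polynomial and Chebyshev evaluation notes above translate into very little code. The sketch below uses the usual `Evaluate.Polynomial` helper (coefficient for power k at index k, as in the [3, -1, 2] example) and adds a stand-alone Clenshaw recurrence written from the description, not from the library source; `ChebyshevSum` is a hypothetical helper name introduced only for illustration.

```csharp
using System;
using MathNet.Numerics;

class EvaluateDemo
{
    // Clenshaw recurrence for S = c[0]*T_0(y) + c[1]*T_1(y) + ... + c[N-1]*T_{N-1}(y).
    // Simplified sketch: coefficients here are in natural order (index k = order k),
    // whereas the Cephes-style routine described above stores them in reverse order
    // and folds a factor 1/2 into the zero-order term.
    static double ChebyshevSum(double y, double[] c)
    {
        double b1 = 0.0, b2 = 0.0;
        for (int k = c.Length - 1; k >= 1; k--)
        {
            double b0 = 2.0 * y * b1 - b2 + c[k];
            b2 = b1;
            b1 = b0;
        }
        return y * b1 - b2 + c[0];
    }

    static void Main()
    {
        // Coefficients ordered by power: [3, -1, 2] represents y = 2x^2 - x + 3.
        Console.WriteLine(Evaluate.Polynomial(2.0, 3.0, -1.0, 2.0));   // 9

        // 1 + 0.5*T_1(y) - 0.25*T_2(y) evaluated at y = 0.6.
        double[] cheb = { 1.0, 0.5, -0.25 };
        Console.WriteLine(ChebyshevSum(0.6, cheb));                    // 1.37
        // Cross-check against the explicit form T_1(y) = y, T_2(y) = 2y^2 - 1:
        Console.WriteLine(1.0 + 0.5 * 0.6 - 0.25 * (2 * 0.6 * 0.6 - 1.0));
    }
}
```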
+ + + Valley-shaped Rosenbrock function for 2 dimensions: (x,y) -> (1-x)^2 + 100*(y-x^2)^2. + This function has a global minimum at (1,1) with f(1,1) = 0. + Common range: [-5,10] or [-2.048,2.048]. + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Valley-shaped Rosenbrock function for 2 or more dimensions. + This function have a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1,1,...,1). + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Himmelblau, a multi-modal function: (x,y) -> (x^2+y-11)^2 + (x+y^2-7)^2 + This function has 4 global minima with f(x,y) = 0. + Common range: [-6,6]. + Named after David Mautner Himmelblau + + + https://en.wikipedia.org/wiki/Himmelblau%27s_function + + + + + Rastrigin, a highly multi-modal function with many local minima. + Global minimum of all zeros with f(0) = 0. + Common range: [-5.12,5.12]. + + + https://en.wikipedia.org/wiki/Rastrigin_function + http://www.sfu.ca/~ssurjano/rastr.html + + + + + Drop-Wave, a multi-modal and highly complex function with many local minima. + Global minimum of all zeros with f(0) = -1. + Common range: [-5.12,5.12]. + + + http://www.sfu.ca/~ssurjano/drop.html + + + + + Ackley, a function with many local minima. It is nearly flat in outer regions but has a large hole at the center. + Global minimum of all zeros with f(0) = 0. + Common range: [-32.768, 32.768]. + + + http://www.sfu.ca/~ssurjano/ackley.html + + + + + Bowl-shaped first Bohachevsky function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-100, 100] + + + http://www.sfu.ca/~ssurjano/boha.html + + + + + Plate-shaped Matyas function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-10, 10]. + + + http://www.sfu.ca/~ssurjano/matya.html + + + + + Valley-shaped six-hump camel back function. + Two global minima and four local minima. Global minima with f(x) ) -1.0316 at (0.0898,-0.7126) and (-0.0898,0.7126). + Common range: x in [-3,3], y in [-2,2]. + + + http://www.sfu.ca/~ssurjano/camel6.html + + + + + Statistics operating on arrays assumed to be unsorted. + WARNING: Methods with the Inplace-suffix may modify the data array by reordering its entries. + + + + + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. 
+ Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. 
+ + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. 
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + A class with correlation measures between two datasets. + + + + + Auto-correlation function (ACF) based on FFT for all possible lags k. + + Data array to calculate auto correlation for. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function (ACF) based on FFT for lags between kMin and kMax. + + The data array to calculate auto correlation for. + Max lag to calculate ACF for must be positive and smaller than x.Length. + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function based on FFT for lags k. + + The data array to calculate auto correlation for. + Array with lags to calculate ACF for. + An array with the ACF as a function of the lags k. + + + + The internal method for calculating the auto-correlation. + + The data array to calculate auto-correlation for + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length + Max lag (EXCLUSIVE) to calculate ACF for must be positive and smaller than x.Length + An array with the ACF as a function of the lags k. + + + + Computes the Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + The Pearson product-moment correlation coefficient. + + + + Computes the Weighted Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + Corresponding weights of data. + The Weighted Pearson product-moment correlation coefficient. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Array of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Enumerable of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Spearman Ranked Correlation coefficient. + + Sample data series A. + Sample data series B. + The Spearman ranked correlation coefficient. + + + + Computes the Spearman Ranked Correlation matrix. + + Array of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the Spearman Ranked Correlation matrix. + + Enumerable of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the basic statistics of data set. The class meets the + NIST standard of accuracy for mean, variance, and standard deviation + (the only statistics they provide exact values for) and exceeds them + in increased accuracy mode. + Recommendation: consider to use RunningStatistics instead. 
+ + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Gets the size of the sample. + + The size of the sample. + + + + Gets the sample mean. + + The sample mean. + + + + Gets the unbiased population variance estimator (on a dataset of size N will use an N-1 normalizer). + + The sample variance. + + + + Gets the unbiased population standard deviation (on a dataset of size N will use an N-1 normalizer). + + The sample standard deviation. + + + + Gets the sample skewness. + + The sample skewness. + Returns zero if is less than three. + + + + Gets the sample kurtosis. + + The sample kurtosis. + Returns zero if is less than four. + + + + Gets the maximum sample value. + + The maximum sample value. + + + + Gets the minimum sample value. + + The minimum sample value. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Internal use. Method use for setting the statistics. + + For setting Mean. + For setting Variance. + For setting Skewness. + For setting Kurtosis. + For setting Minimum. + For setting Maximum. + For setting Count. + + + + A consists of a series of s, + each representing a region limited by a lower bound (exclusive) and an upper bound (inclusive). + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + This IComparer performs comparisons between a point and a bucket. + + + + + Compares a point and a bucket. The point will be encapsulated in a bucket with width 0. + + The first bucket to compare. + The second bucket to compare. + -1 when the point is less than this bucket, 0 when it is in this bucket and 1 otherwise. + + + + Lower Bound of the Bucket. + + + + + Upper Bound of the Bucket. + + + + + The number of datapoints in the bucket. + + + Value may be NaN if this was constructed as a argument. + + + + + Initializes a new instance of the Bucket class. + + + + + Constructs a Bucket that can be used as an argument for a + like when performing a Binary search. + + Value to look for + + + + Creates a copy of the Bucket with the lowerbound, upperbound and counts exactly equal. 
+ + A cloned Bucket object. + + + + Width of the Bucket. + + + + + True if this is a single point argument for + when performing a Binary search. + + + + + Default comparer. + + + + + This method check whether a point is contained within this bucket. + + The point to check. + + 0 if the point falls within the bucket boundaries; + -1 if the point is smaller than the bucket, + +1 if the point is larger than the bucket. + + + + Comparison of two disjoint buckets. The buckets cannot be overlapping. + + + 0 if UpperBound and LowerBound are bit-for-bit equal + 1 if This bucket is lower that the compared bucket + -1 otherwise + + + + + Checks whether two Buckets are equal. + + + UpperBound and LowerBound are compared bit-for-bit, but This method tolerates a + difference in Count given by . + + + + + Provides a hash code for this bucket. + + + + + Formats a human-readable string for this bucket. + + + + + A class which computes histograms of data. + + + + + Contains all the Buckets of the Histogram. + + + + + Indicates whether the elements of buckets are currently sorted. + + + + + Initializes a new instance of the Histogram class. + + + + + Constructs a Histogram with a specific number of equally sized buckets. The upper and lower bound of the histogram + will be set to the smallest and largest datapoint. + + The data sequence to build a histogram on. + The number of buckets to use. + + + + Constructs a Histogram with a specific number of equally sized buckets. + + The data sequence to build a histogram on. + The number of buckets to use. + The histogram lower bound. + The histogram upper bound. + + + + Add one data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The datapoint which we want to add. + + + + Add a sequence of data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The sequence of datapoints which we want to add. + + + + Adds a Bucket to the Histogram. + + + + + Sort the buckets if needed. + + + + + Returns the Bucket that contains the value v. + + The point to search the bucket for. + A copy of the bucket containing point . + + + + Returns the index in the Histogram of the Bucket + that contains the value v. + + The point to search the bucket index for. + The index of the bucket containing the point. + + + + Returns the lower bound of the histogram. + + + + + Returns the upper bound of the histogram. + + + + + Gets the n'th bucket. + + The index of the bucket to be returned. + A copy of the n'th bucket. + + + + Gets the number of buckets. + + + + + Gets the total number of datapoints in the histogram. + + + + + Prints the buckets contained in the . + + + + + Kernel density estimation (KDE). + + + + + Estimate the probability density function of a random variable. + + + The routine assumes that the provided kernel is well defined, i.e. a real non-negative function that integrates to 1. + + + + + Estimate the probability density function of a random variable with a Gaussian kernel. + + + + + Estimate the probability density function of a random variable with an Epanechnikov kernel. + The Epanechnikov kernel is optimal in a mean square error sense. + + + + + Estimate the probability density function of a random variable with a uniform kernel. + + + + + Estimate the probability density function of a random variable with a triangular kernel. 
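To make the statistics and kernel-density entries above concrete, here is a small, hedged example. It assumes the usual `MathNet.Numerics.Statistics` types (`ArrayStatistics`, `DescriptiveStatistics`, `Correlation`, `KernelDensity`); the exact `KernelDensity.EstimateGaussian` signature is an assumption and may differ between library versions.

```csharp
// Sketch of the statistics helpers described above; member names follow the
// common Math.NET conventions and are assumptions, not quotes from the docs.
using System;
using MathNet.Numerics.Statistics;

class StatisticsDemo
{
    static void Main()
    {
        double[] samples = { 1.2, 0.7, 3.4, 2.2, 1.9, 2.8, 0.3, 2.1 };

        // Unsorted-array statistics; the *Inplace variants reorder the array,
        // so a clone is passed here.
        Console.WriteLine($"mean   = {ArrayStatistics.Mean(samples)}");
        Console.WriteLine($"stddev = {ArrayStatistics.StandardDeviation(samples)}");
        Console.WriteLine($"median = {ArrayStatistics.MedianInplace((double[])samples.Clone())}");

        // DescriptiveStatistics computes the whole summary in one pass.
        var stats = new DescriptiveStatistics(samples);
        Console.WriteLine($"skewness = {stats.Skewness}, kurtosis = {stats.Kurtosis}");

        // Pearson correlation between two series of equal length.
        double[] other = { 1.0, 0.9, 3.0, 2.5, 1.7, 2.6, 0.5, 2.0 };
        Console.WriteLine($"r = {Correlation.Pearson(samples, other)}");

        // Gaussian kernel density estimate of the sample density at x = 2.0
        // (bandwidth chosen by hand for the sketch).
        double density = KernelDensity.EstimateGaussian(2.0, 0.5, samples);
        Console.WriteLine($"KDE(2.0) = {density}");
    }
}
```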
+ + + + + A Gaussian kernel (PDF of Normal distribution with mean 0 and variance 1). + This kernel is the default. + + + + + Epanechnikov Kernel: + x => Math.Abs(x) <= 1.0 ? 3.0/4.0(1.0-x^2) : 0.0 + + + + + Uniform Kernel: + x => Math.Abs(x) <= 1.0 ? 1.0/2.0 : 0.0 + + + + + Triangular Kernel: + x => Math.Abs(x) <= 1.0 ? (1.0-Math.Abs(x)) : 0.0 + + + + + A hybrid Monte Carlo sampler for multivariate distributions. + + + + + Number of parameters in the density function. + + + + + Distribution to sample momentum from. + + + + + Standard deviations used in the sampling of different components of the + momentum. + + + + + Gets or sets the standard deviations used in the sampling of different components of the + momentum. + + When the length of pSdv is not the same as Length. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + 1 using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the a random number generator provided by the user. + A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviations + given by pSdv. This constructor will set the burn interval, the method used for + numerical differentiation and the random number generator. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. 
+ The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + The method used for numerical differentiation. + When the number of burnInterval iteration is negative. + When the length of pSdv is not the same as x0. + + + + Initialize parameters. + + The current location of the sampler. + + + + Checking that the location and the momentum are of the same dimension and that each component is positive. + + The standard deviations used for sampling the momentum. + When the length of pSdv is not the same as Length or if any + component is negative. + When pSdv is null. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the gradient. Uses a simple three point estimation. + + Function which the gradient is to be evaluated. + The location where the gradient is to be evaluated. + The gradient of the function at the point x. + + + + The Hybrid (also called Hamiltonian) Monte Carlo produces samples from distribution P using a set + of Hamiltonian equations to guide the sampling process. It uses the negative of the log density as + a potential energy, and a randomly generated momentum to set up a Hamiltonian system, which is then used + to sample the distribution. This can result in a faster convergence than the random walk Metropolis sampler + (). + + The type of samples this sampler produces. + + + + The delegate type that defines a derivative evaluated at a certain point. + + Function to be differentiated. + Value where the derivative is computed. + + + + Evaluates the energy function of the target distribution. + + + + + The current location of the sampler. + + + + + The number of burn iterations between two samples. + + + + + The size of each step in the Hamiltonian equation. + + + + + The number of iterations in the Hamiltonian equation. + + + + + The algorithm used for differentiation. + + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the number of iterations in the Hamiltonian equation. + + When frog leap steps is negative or zero. + + + + Gets or sets the size of each step in the Hamiltonian equation. + + When step size is negative or zero. + + + + Constructs a new Hybrid Monte Carlo sampler. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + Random number generator used for sampling the momentum. + The method used for differentiation. + When the number of burnInterval iteration is negative. + When either x0, pdfLnP or diff is null. + + + + Returns a sample from the distribution P. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Method used to update the sample location. Used in the end of the loop. + + The old energy. + The old gradient/derivative of the energy. + The new sample. + The new gradient/derivative of the energy. + The new energy. + The difference between the old Hamiltonian and new Hamiltonian. 
Use to determine if an update should take place.
+
+ Use for creating temporary objects in the Burn method.
+
+ An object of type T.
+
+ Use for copying objects in the Burn method.
+
+ The source of copying.
+ A copy of the source object.
+
+ Method for doing dot product.
+
+ First vector/scalar in the product.
+ Second vector/scalar in the product.
+
+ Method for adding: multiply the second vector/scalar by factor and then add it to the first vector/scalar.
+
+ First vector/scalar.
+ Scalar factor multiplying the second vector/scalar.
+ Second vector/scalar.
+
+ Multiplies the second vector/scalar by factor and then subtracts it from the first vector/scalar.
+
+ First vector/scalar.
+ Scalar factor to be multiplied with the second vector/scalar.
+ Second vector/scalar.
+
+ Method for sampling a random momentum.
+
+ Momentum to be randomized.
+
+ The Hamiltonian equations that are used to produce the new sample.
+
+ Method to compute the Hamiltonian used in the method.
+
+ The momentum.
+ The energy.
+ Hamiltonian = E + p.p/2
+
+ Method to check and set a quantity to a non-negative value.
+
+ Proposed value to be checked.
+ Returns value if it is greater than or equal to zero.
+ Throws when value is negative.
+
+ Method to check and set a quantity to a positive value.
+
+ Proposed value to be checked.
+ Returns value if it is greater than zero.
+ Throws when value is negative or zero.
+
+ Method to check and set a quantity to a positive value.
+
+ Proposed value to be checked.
+ Returns value if it is greater than zero.
+ Throws when value is negative or zero.
+
+ Provides utilities to analyze the convergence of a set of samples from a .
+
+ Computes the auto correlations of a series evaluated by a function f.
+
+ The series for computing the auto correlation.
+ The lag in the series.
+ The function used to evaluate the series.
+ The auto correlation.
+ Throws if lag is zero or if lag is greater than or equal to the length of Series.
+
+ Computes the effective size of the sample when evaluated by a function f.
+
+ The samples.
+ The function used for evaluating the series.
+ The effective size when auto correlation is taken into account.
+
+ A method which samples datapoints from a proposal distribution. The implementation of this sampler
+ is stateless: no variables are saved between two calls to Sample. This proposal is different from
+ in that it doesn't take any parameters; it samples random
+ variables from the whole domain.
+
+ The type of the datapoints.
+ A sample from the proposal distribution.
+
+ A method which samples datapoints from a proposal distribution given an initial sample. The implementation
+ of this sampler is stateless: no variables are saved between two calls to Sample. This proposal is different from
+ in that it samples locally around an initial point. In other words, it
+ makes a small local move rather than producing a global sample from the proposal.
+
+ The type of the datapoints.
+ The initial sample.
+ A sample from the proposal distribution.
+
+ A function which evaluates a density.
+
+ The type of data the distribution is over.
+ The sample we want to evaluate the density for.
+
+ A function which evaluates a log density.
+
+ The type of data the distribution is over.
+ The sample we want to evaluate the log density for.
+
+ A function which evaluates the log of a transition kernel probability.
+ + The type for the space over which this transition kernel is defined. + The new state in the transition. + The previous state in the transition. + The log probability of the transition. + + + + The interface which every sampler must implement. + + The type of samples this sampler produces. + + + + The random number generator for this class. + + + + + Keeps track of the number of accepted samples. + + + + + Keeps track of the number of calls to the proposal sampler. + + + + + Initializes a new instance of the class. + + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Gets or sets the random number generator. + + When the random number generator is null. + + + + Returns one sample. + + + + + Returns a number of samples. + + The number of samples we want. + An array of samples. + + + + Gets the acceptance rate of the sampler. + + + + + Metropolis-Hastings sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis-Hastings sampling doesn't require that the + proposal distribution Q is symmetric in comparison to . It does need to + be able to evaluate the proposal sampler's log density though. All densities are required to be in log space. + + The Metropolis-Hastings sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the target distribution. + + + + + Evaluates the log transition probability for the proposal distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis-Hastings sampler using the default random number generator. This + constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + The log transition probability for the proposal distribution. + A method that samples from the proposal distribution. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Metropolis sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis sampling requires that the proposal + distribution Q is symmetric. All densities are required to be in log space. + + The Metropolis sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the sampling distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis sampler using the default random number generator. + + The initial sample. + The log density of the distribution we want to sample from. 
+ A method that samples from the symmetric proposal distribution.
+ The number of iterations in between returning samples.
+ When the number of burnInterval iteration is negative.
+
+ Gets or sets the number of iterations in between returning samples.
+
+ When burn interval is negative.
+
+ This method runs the sampler for a number of iterations without returning a sample.
+
+ Returns a sample from the distribution P.
+
+ Rejection sampling produces samples from distribution P by sampling from a proposal distribution Q
+ and accepting/rejecting based on the density of P and Q. The densities of P and Q don't need
+ to be normalized, but we do need that for each x, P(x) < Q(x).
+
+ The type of samples this sampler produces.
+
+ Evaluates the density function of the sampling distribution.
+
+ Evaluates the density function of the proposal distribution.
+
+ A function which samples from a proposal distribution.
+
+ Constructs a new rejection sampler using the default random number generator.
+
+ The density of the distribution we want to sample from.
+ The density of the proposal distribution.
+ A method that samples from the proposal distribution.
+
+ Returns a sample from the distribution P.
+
+ When the algorithm detects that the proposal
+ distribution doesn't upper bound the target distribution.
+
+ A hybrid Monte Carlo sampler for univariate distributions.
+
+ Distribution to sample momentum from.
+
+ Standard deviation used in the sampling of the
+ momentum.
+
+ Gets or sets the standard deviation used in the sampling of the
+ momentum.
+
+ When standard deviation is negative.
+
+ Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution.
+ The momentum will be sampled from a normal distribution with standard deviation
+ specified by pSdv using the default random
+ number generator. A three point estimation will be used for differentiation.
+ This constructor will set the burn interval.
+
+ The initial sample.
+ The log density of the distribution we want to sample from.
+ Number of frog leap simulation steps.
+ Size of the frog leap simulation steps.
+ The number of iterations in between returning samples.
+ The standard deviation of the normal distribution that is used to sample
+ the momentum.
+ When the number of burnInterval iteration is negative.
+
+ Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution.
+ The momentum will be sampled from a normal distribution with standard deviation
+ specified by pSdv using a random
+ number generator provided by the user. A three point estimation will be used for differentiation.
+ This constructor will set the burn interval.
+
+ The initial sample.
+ The log density of the distribution we want to sample from.
+ Number of frog leap simulation steps.
+ Size of the frog leap simulation steps.
+ The number of iterations in between returning samples.
+ The standard deviation of the normal distribution that is used to sample
+ the momentum.
+ Random number generator used to sample the momentum.
+ When the number of burnInterval iteration is negative.
+
+ Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution.
+ The momentum will be sampled from a normal distribution with standard deviation
+ given by pSdv using a random
+ number generator provided by the user. This constructor will set both the burn interval and the method used for
+ numerical differentiation.
+ + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + The method used for numerical differentiation. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the derivative. Uses a simple three point estimation. + + Function for which the derivative is to be evaluated. + The location where the derivative is to be evaluated. + The derivative of the function at the point x. + + + + Slice sampling produces samples from distribution P by uniformly sampling from under the pdf of P using + a technique described in "Slice Sampling", R. Neal, 2003. All densities are required to be in log space. + + The slice sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + + + + Evaluates the log density function of the target distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + The scale of the slice sampler. + + + + + Constructs a new Slice sampler using the default random + number generator. The burn interval will be set to 0. + + The initial sample. + The density of the distribution we want to sample from. + The scale factor of the slice sampler. + When the scale of the slice sampler is not positive. + + + + Constructs a new slice sampler using the default random number generator. It + will set the number of burnInterval iterations and run a burnInterval phase. + + The initial sample. + The density of the distribution we want to sample from. + The number of iterations in between returning samples. + The scale factor of the slice sampler. + When the number of burnInterval iteration is negative. + When the scale of the slice sampler is not positive. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the scale of the slice sampler. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Running statistics over a window of data, allows updating by adding values. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. 
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + Replace ties with their mean (non-integer ranks). Default. + + + Replace ties with their minimum (typical sports ranking). + + + Replace ties with their maximum. + + + Permutation with increasing values at each index of ties. + + + + Running statistics accumulator, allows updating by adding values + or by combining two accumulators. + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Evaluates the population skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + + + + Evaluates the population kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). 
+ Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + + Create a new running statistics over the combined samples of two existing running statistics. + + + + + Statistics operating on an array already sorted ascendingly. + + + + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. 
+ Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. 
+ + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Extension methods to return basic statistics on set of data. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. 
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The sample data.
+ The minimum value in the sample data.
+
+ Returns the maximum magnitude and phase value in the sample data.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The sample data.
+ The maximum value in the sample data.
+
+ Returns the maximum magnitude and phase value in the sample data.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The sample data.
+ The maximum value in the sample data.
+
+ Evaluates the sample mean, an estimate of the population mean.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The data to calculate the mean of.
+ The mean of the sample.
+
+ Evaluates the sample mean, an estimate of the population mean.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The data to calculate the mean of.
+ The mean of the sample.
+
+ Evaluates the sample mean, an estimate of the population mean.
+ Returns NaN if data is empty or if any entry is NaN.
+ Null-entries are ignored.
+
+ The data to calculate the mean of.
+ The mean of the sample.
+
+ Evaluates the geometric mean.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The data to calculate the geometric mean of.
+ The geometric mean of the sample.
+
+ Evaluates the geometric mean.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The data to calculate the geometric mean of.
+ The geometric mean of the sample.
+
+ Evaluates the harmonic mean.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The data to calculate the harmonic mean of.
+ The harmonic mean of the sample.
+
+ Evaluates the harmonic mean.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The data to calculate the harmonic mean of.
+ The harmonic mean of the sample.
+
+ Estimates the unbiased population variance from the provided samples.
+ On a dataset of size N will use an N-1 normalizer (Bessel's correction).
+ Returns NaN if data has less than two entries or if any entry is NaN.
+
+ A subset of samples, sampled from the full population.
+
+ Estimates the unbiased population variance from the provided samples.
+ On a dataset of size N will use an N-1 normalizer (Bessel's correction).
+ Returns NaN if data has less than two entries or if any entry is NaN.
+
+ A subset of samples, sampled from the full population.
+
+ Estimates the unbiased population variance from the provided samples.
+ On a dataset of size N will use an N-1 normalizer (Bessel's correction).
+ Returns NaN if data has less than two entries or if any entry is NaN.
+ Null-entries are ignored.
+
+ A subset of samples, sampled from the full population.
+
+ Evaluates the variance from the provided full population.
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The full population data.
+
+ Evaluates the variance from the provided full population.
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset.
+ Returns NaN if data is empty or if any entry is NaN.
+
+ The full population data.
+
+ Evaluates the variance from the provided full population.
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset.
+ Returns NaN if data is empty or if any entry is NaN.
+ Null-entries are ignored.
+
+ The full population data.
+
+ Estimates the unbiased population standard deviation from the provided samples.
+ On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + The full population data. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + + The full population data. + + + + Evaluates the kurtosis from the full population. 
+ Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. + Uses a normalizer (Bessel's correction; type 2). + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness and kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + + The full population data. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. 
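The sample/population covariance distinction documented above comes down to the normalizer: the sample estimate divides the summed cross-deviations by N-1 (Bessel's correction), while the population variant divides by N. The following is a minimal C# sketch of that calculation only; the class and method names are illustrative and not part of the library.

```csharp
using System;
using System.Linq;

// Illustrative sketch (not the library implementation):
// sample covariance uses the N-1 normalizer, population covariance uses N.
static class CovarianceSketch
{
    public static double SampleCovariance(double[] x, double[] y)
    {
        if (x.Length != y.Length || x.Length < 2) return double.NaN;
        double meanX = x.Average(), meanY = y.Average();
        double sum = 0.0;
        for (int i = 0; i < x.Length; i++)
            sum += (x[i] - meanX) * (y[i] - meanY);
        return sum / (x.Length - 1);   // N-1 normalizer (Bessel's correction)
    }

    public static double PopulationCovariance(double[] x, double[] y)
    {
        if (x.Length != y.Length || x.Length == 0) return double.NaN;
        double meanX = x.Average(), meanY = y.Average();
        double sum = 0.0;
        for (int i = 0; i < x.Length; i++)
            sum += (x[i] - meanX) * (y[i] - meanY);
        return sum / x.Length;         // N normalizer, biased if applied to a subset
    }

    static void Main()
    {
        double[] a = { 1.0, 2.0, 3.0, 4.0 };
        double[] b = { 2.0, 4.0, 6.0, 8.0 };
        Console.WriteLine(SampleCovariance(a, b));      // 3.333...
        Console.WriteLine(PopulationCovariance(a, b));  // 2.5
    }
}
```

On the example data the two results differ only by the 3-versus-4 divisor, which is exactly the bias the documentation keeps pointing out.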
+ + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + The full population data. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The data to calculate the mean of. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). 
+ Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. 
+ Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. 
+ The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. 
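The quantile members above repeatedly reference the R-8 (SciPy (1/3,1/3)) definition: h = (N + 1/3)·tau + 1/3, linear interpolation between adjacent order statistics, plus the stated boundary rules. The sketch below assumes a pre-sorted array and is only an illustration of that rule; the names are not the library's.

```csharp
using System;

// Minimal sketch of the R-8 quantile rule described above.
// Assumes `data` is already sorted ascendingly; illustrative only.
static class QuantileSketch
{
    public static double QuantileR8(double[] data, double tau)
    {
        int n = data.Length;
        if (n == 0 || tau < 0.0 || tau > 1.0) return double.NaN;
        if (n == 1) return data[0];

        // Boundary rules: when tau < (2/3)/(N + 1/3) use the smallest value,
        // when tau >= (N - 1/3)/(N + 1/3) use the largest value.
        double h = (n + 1.0 / 3.0) * tau + 1.0 / 3.0;
        if (h <= 1.0) return data[0];
        if (h >= n) return data[n - 1];

        int k = (int)Math.Floor(h);          // 1-based index of the lower order statistic
        double fraction = h - k;
        return data[k - 1] + fraction * (data[k] - data[k - 1]);
    }

    static void Main()
    {
        double[] sorted = { 1.0, 2.0, 3.0, 4.0, 10.0 };
        Console.WriteLine(QuantileR8(sorted, 0.5));   // median: 3
        Console.WriteLine(QuantileR8(sorted, 0.25));  // lower quartile: ~1.667
    }
}
```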
+ + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + Null-entries are ignored. + + The data sample sequence. + + + + Evaluates the sample mean over a moving window, for each samples. + Returns NaN if no data is empty or if any entry is NaN. + + The sample stream to calculate the mean of. + The number of last samples to consider. + + + + Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. + Can be used in a streaming way, e.g. on large datasets not fitting into memory. + + + + + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. 
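The moving-window mean mentioned above can be computed with a bounded queue and a running sum, so each output costs O(1). This is only a sketch under simplified assumptions (illustrative names, NaN handling not reproduced); while fewer than windowSize samples have arrived, the mean is taken over the partially filled window.

```csharp
using System;
using System.Collections.Generic;

// Sketch of a moving-window mean: for each incoming sample, emit the mean
// of the last `windowSize` samples seen so far. Illustrative only.
static class MovingAverageSketch
{
    public static IEnumerable<double> MovingAverage(IEnumerable<double> samples, int windowSize)
    {
        var window = new Queue<double>(windowSize);
        double sum = 0.0;
        foreach (double x in samples)
        {
            window.Enqueue(x);
            sum += x;
            if (window.Count > windowSize)
            {
                sum -= window.Dequeue();   // drop the oldest sample from the running sum
            }
            yield return sum / window.Count;
        }
    }

    static void Main()
    {
        foreach (double m in MovingAverage(new[] { 1.0, 2.0, 3.0, 4.0, 5.0 }, 3))
        {
            Console.WriteLine(m);   // 1, 1.5, 2, 3, 4
        }
    }
}
```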
+ + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. 
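The single-pass variance and standard deviation described above can be realised with Welford's online update, which keeps only the count, the running mean and the sum of squared deviations. The code below is a sketch of that idea under the documented N-1 convention, not necessarily the exact scheme the library uses.

```csharp
using System;
using System.Collections.Generic;

// One way to get mean, variance and standard deviation in a single pass
// without keeping the data in memory: Welford's online update. Sketch only.
static class OnePassStatsSketch
{
    public static (double Mean, double Variance, double StdDev) MeanVariance(IEnumerable<double> samples)
    {
        long n = 0;
        double mean = 0.0;
        double m2 = 0.0;   // sum of squared deviations from the running mean

        foreach (double x in samples)
        {
            n++;
            double delta = x - mean;
            mean += delta / n;
            m2 += delta * (x - mean);
        }

        if (n == 0) return (double.NaN, double.NaN, double.NaN);
        double variance = n > 1 ? m2 / (n - 1) : double.NaN;   // N-1 normalizer (Bessel's correction)
        return (mean, variance, Math.Sqrt(variance));
    }

    static void Main()
    {
        var (mean, variance, stdDev) = MeanVariance(new[] { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 });
        Console.WriteLine($"{mean} {variance} {stdDev}");   // 5, ~4.571, ~2.138
    }
}
```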
+ + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. 
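The distinction drawn above between the N-1 (sample) and N (full population) normalizers, and the combined single-pass mean/variance estimators, could be exercised as follows. This is a sketch under the assumption that the members are named as in the shipped 4.x assembly:

```csharp
using System;
using MathNet.Numerics.Statistics;

class VarianceNormalizerSketch
{
    static void Main()
    {
        double[] x = { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 };
        double[] y = { 1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 6.0, 8.0 };

        // N-1 normalizer: unbiased estimate when x is a sample of a larger population.
        double sampleVar = StreamingStatistics.Variance(x);

        // N normalizer: appropriate only when x really is the full population.
        double populationVar = StreamingStatistics.PopulationVariance(x);

        // Mean and unbiased variance computed together in a single pass.
        var mv = StreamingStatistics.MeanVariance(x);
        double mean = mv.Item1;
        double variance = mv.Item2;

        // Unbiased covariance of two equally long sample streams.
        double cov = StreamingStatistics.Covariance(x, y);

        Console.WriteLine($"{sampleVar} {populationVar} {mean} {variance} {cov}");
    }
}
```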
+ + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Calculates the entropy of a stream of double values. + Returns NaN if any of the values in the stream are NaN. + + The input stream to evaluate. + + + + + Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. + + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The body to be invoked for each iteration range. + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The partition size for splitting work into smaller pieces. + The body to be invoked for each iteration range. + + + + Executes each of the provided actions inside a discrete, asynchronous task. + + An array of actions to execute. + The actions array contains a null element. + At least one invocation of the actions threw an exception. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Double-precision trigonometry toolkit. + + + + + Constant to convert a degree to grad. + + + + + Converts a degree (360-periodic) angle to a grad (400-periodic) angle. + + The degree to convert. + The converted grad angle. + + + + Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. + + The degree to convert. + The converted radian angle. + + + + Converts a grad (400-periodic) angle to a degree (360-periodic) angle. + + The grad to convert. + The converted degree. + + + + Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. + + The grad to convert. + The converted radian. + + + + Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. + + The radian to convert. + The converted degree. 
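The tail of this block moves from statistics (RMS, entropy, the parallel helpers) to the Trig toolkit's angle-unit conversions between degrees, grads and radians. A small combined sketch; the Trig method names are assumptions based on the documented conversions:

```csharp
using System;
using MathNet.Numerics;
using MathNet.Numerics.Statistics;

class RmsAndAnglesSketch
{
    static void Main()
    {
        // Root mean square (quadratic mean) of an audio-like sample block.
        double[] block = { 0.1, -0.4, 0.3, -0.2 };
        double rms = StreamingStatistics.RootMeanSquare(block);

        // 360-periodic degrees, 400-periodic grads, 2*pi-periodic radians.
        double rad  = Trig.DegreeToRadian(45.0);          // pi/4
        double grad = Trig.DegreeToGrad(45.0);            // 50 grad
        double deg  = Trig.RadianToDegree(Math.PI / 2.0); // 90 degrees

        Console.WriteLine($"rms={rms}, rad={rad}, grad={grad}, deg={deg}");
    }
}
```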
+ + + + Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. + + The radian to convert. + The converted grad. + + + + Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). + + + + + Trigonometric Sine of an angle in radian, or opposite / hypotenuse. + + The angle in radian. + The sine of the radian angle. + + + + Trigonometric Sine of a Complex number. + + The complex value. + The sine of the complex number. + + + + Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. + + The angle in radian. + The cosine of an angle in radian. + + + + Trigonometric Cosine of a Complex number. + + The complex value. + The cosine of a complex number. + + + + Trigonometric Tangent of an angle in radian, or opposite / adjacent. + + The angle in radian. + The tangent of the radian angle. + + + + Trigonometric Tangent of a Complex number. + + The complex value. + The tangent of the complex number. + + + + Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. + + The angle in radian. + The cotangent of an angle in radian. + + + + Trigonometric Cotangent of a Complex number. + + The complex value. + The cotangent of the complex number. + + + + Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. + + The angle in radian. + The secant of the radian angle. + + + + Trigonometric Secant of a Complex number. + + The complex value. + The secant of the complex number. + + + + Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. + + The angle in radian. + Cosecant of an angle in radian. + + + + Trigonometric Cosecant of a Complex number. + + The complex value. + The cosecant of a complex number. + + + + Trigonometric principal Arc Sine in radian + + The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Sine of this Complex number. + + The complex value. + The arc sine of a complex number. + + + + Trigonometric principal Arc Cosine in radian + + The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Cosine of this Complex number. + + The complex value. + The arc cosine of a complex number. + + + + Trigonometric principal Arc Tangent in radian + + The opposite for a unit adjacent (i.e. opposite / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Tangent of this Complex number. + + The complex value. + The arc tangent of a complex number. + + + + Trigonometric principal Arc Cotangent in radian + + The adjacent for a unit opposite (i.e. adjacent / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cotangent of this Complex number. + + The complex value. + The arc cotangent of a complex number. + + + + Trigonometric principal Arc Secant in radian + + The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Secant of this Complex number. + + The complex value. + The arc secant of a complex number. + + + + Trigonometric principal Arc Cosecant in radian + + The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cosecant of this Complex number. + + The complex value. + The arc cosecant of a complex number. + + + + Hyperbolic Sine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic sine of the angle. 
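Beyond the standard Sin/Cos/Tan, the block above documents the normalized sinc, the reciprocal functions (cotangent, secant, cosecant) and their principal inverses, for both real and complex arguments. A real-valued sketch, assuming the Trig class names of the packaged version:

```csharp
using System;
using MathNet.Numerics;

class TrigToolkitSketch
{
    static void Main()
    {
        // Normalized sinc: sin(pi*x) / (pi*x), equal to 1 at x = 0.
        double s0 = Trig.Sinc(0.0);   // 1.0
        double s1 = Trig.Sinc(0.5);   // 2/pi

        // Reciprocal functions of an angle in radians.
        double cot = Trig.Cot(1.0);   // cos(1)/sin(1)
        double sec = Trig.Sec(1.0);   // 1/cos(1)
        double csc = Trig.Csc(1.0);   // 1/sin(1)

        // Principal inverse functions return the angle in radians.
        double asin = Trig.Asin(0.5); // pi/6
        double acot = Trig.Acot(1.0); // pi/4

        Console.WriteLine($"{s0} {s1} {cot} {sec} {csc} {asin} {acot}");
    }
}
```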
+ + + + Hyperbolic Sine of a Complex number. + + The complex value. + The hyperbolic sine of a complex number. + + + + Hyperbolic Cosine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic Cosine of the angle. + + + + Hyperbolic Cosine of a Complex number. + + The complex value. + The hyperbolic cosine of a complex number. + + + + Hyperbolic Tangent in radian + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic tangent of the angle. + + + + Hyperbolic Tangent of a Complex number. + + The complex value. + The hyperbolic tangent of a complex number. + + + + Hyperbolic Cotangent + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cotangent of the angle. + + + + Hyperbolic Cotangent of a Complex number. + + The complex value. + The hyperbolic cotangent of a complex number. + + + + Hyperbolic Secant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic secant of the angle. + + + + Hyperbolic Secant of a Complex number. + + The complex value. + The hyperbolic secant of a complex number. + + + + Hyperbolic Cosecant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cosecant of the angle. + + + + Hyperbolic Cosecant of a Complex number. + + The complex value. + The hyperbolic cosecant of a complex number. + + + + Hyperbolic Area Sine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Sine of this Complex number. + + The complex value. + The hyperbolic arc sine of a complex number. + + + + Hyperbolic Area Cosine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosine of this Complex number. + + The complex value. + The hyperbolic arc cosine of a complex number. + + + + Hyperbolic Area Tangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Tangent of this Complex number. + + The complex value. + The hyperbolic arc tangent of a complex number. + + + + Hyperbolic Area Cotangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cotangent of this Complex number. + + The complex value. + The hyperbolic arc cotangent of a complex number. + + + + Hyperbolic Area Secant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Secant of this Complex number. + + The complex value. + The hyperbolic arc secant of a complex number. + + + + Hyperbolic Area Cosecant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosecant of this Complex number. + + The complex value. + The hyperbolic arc cosecant of a complex number. + + + + Hamming window. Named after Richard Hamming. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hamming window. Named after Richard Hamming. + Periodic version, useful e.g. for FFT purposes. + + + + + Hann window. Named after Julius von Hann. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hann window. Named after Julius von Hann. + Periodic version, useful e.g. for FFT purposes. + + + + + Cosine window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Cosine window. + Periodic version, useful e.g. for FFT purposes. + + + + + Lanczos window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Lanczos window. 
+ Periodic version, useful e.g. for FFT purposes. + + + + + Gauss window. + + + + + Blackman window. + + + + + Blackman-Harris window. + + + + + Blackman-Nuttall window. + + + + + Bartlett window. + + + + + Bartlett-Hann window. + + + + + Nuttall window. + + + + + Flat top window. + + + + + Uniform rectangular (Dirichlet) window. + + + + + Triangular window. + + + + + Tukey tapering window. A rectangular window bounded + by half a cosine window on each side. + + Width of the window + Fraction of the window occupied by the cosine parts + +
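The window functions listed above (Hamming, Hann, Blackman variants, Lanczos, Tukey, ...) come in symmetric and periodic flavours and are returned as plain double arrays, which makes them easy to apply before an FFT or when generating filter taps. A sketch, assuming the MathNet.Numerics.Window class of the packaged 4.12 assembly:

```csharp
using System;
using System.Linq;
using MathNet.Numerics;

class WindowSketch
{
    static void Main()
    {
        const int n = 256;

        // Symmetric version: intended for filter design.
        double[] hannSym = Window.Hann(n);

        // Periodic version: intended for FFT / spectral analysis.
        double[] hannPer = Window.HannPeriodic(n);

        // Tukey window: flat in the middle, cosine-tapered on 40% of the width.
        double[] tukey = Window.Tukey(n, 0.4);

        // Apply a window to a block of samples before transforming it.
        double[] samples  = Enumerable.Range(0, n).Select(i => Math.Sin(0.2 * i)).ToArray();
        double[] windowed = samples.Zip(hannPer, (x, w) => x * w).ToArray();

        Console.WriteLine($"{hannSym[0]} {tukey[n / 2]} {windowed[n / 2]}");
    }
}
```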
+
diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.dll new file mode 100755 index 0000000..506eade Binary files /dev/null and b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.dll differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.xml new file mode 100755 index 0000000..4652128 --- /dev/null +++ b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard1.3/MathNet.Numerics.xml @@ -0,0 +1,53895 @@ + + + + MathNet.Numerics + + + + + Useful extension methods for Arrays. + + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Enumerative Combinatorics and Counting. + + + + + Count the number of possible variations without repetition. + The order matters and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of distinct variations. + + + + Count the number of possible variations with repetition. + The order matters and each object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of distinct variations with repetition. + + + + Count the number of possible combinations without repetition. + The order does not matter and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of combinations. + + + + Count the number of possible combinations with repetition. + The order does not matter and an object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of combinations with repetition. + + + + Count the number of possible permutations (without repetition). + + Number of (distinguishable) elements in the set. + Maximum number of permutations without repetition. + + + + Generate a random permutation, without repetition, by generating the index numbers 0 to N-1 and shuffle them randomly. + Implemented using Fisher-Yates Shuffling. + + An array of length N that contains (in any order) the integers of the interval [0, N). + Number of (distinguishable) elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation, without repetition, from a data array by reordering the provided array in-place. + Implemented using Fisher-Yates Shuffling. The provided data array will be modified. + + The data array to be reordered. The array will be modified by this routine. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation from a data sequence by returning the provided data in random order. + Implemented using Fisher-Yates Shuffling. 
+ + The data elements to be reordered. + The random number generator to use. Optional; the default random source will be used if null. + + + + Generate a random combination, without repetition, by randomly selecting some of N elements. + + Number of elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Generate a random combination, without repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Select a random combination, without repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen combination, in the original order. + + + + Generates a random combination, with repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + Integer mask array of length N, for each item the number of times it was selected. + + + + Select a random combination, with repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen combination with repetition, in the original order. + + + + Generate a random variation, without repetition, by randomly selecting k of n elements with order. + Implemented using partial Fisher-Yates Shuffling. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. + Implemented using partial Fisher-Yates Shuffling. + + The data source to choose from. + Number of elements (k) to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation, in random order. + + + + Generate a random variation, with repetition, by randomly selecting k of n elements with order. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. + + The data source to choose from. 
+ Number of elements (k) to choose from the data set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation with repetition, in random order. + + + + 32-bit single precision complex numbers class. + + + + The class Complex32 provides all elementary operations + on complex numbers. All the operators +, -, + *, /, ==, != are defined in the + canonical way. Additional complex trigonometric functions + are also provided. Note that the Complex32 structures + has two special constant values and + . + + + + Complex32 x = new Complex32(1f,2f); + Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); + Complex32 z = (x + y) / (x - y); + + + + For mathematical details about complex numbers, please + have a look at the + Wikipedia + + + + + + The real component of the complex number. + + + + + The imaginary component of the complex number. + + + + + Initializes a new instance of the Complex32 structure with the given real + and imaginary parts. + + The value for the real component. + The value for the imaginary component. + + + + Creates a complex number from a point's polar coordinates. + + A complex number. + The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. + The phase, which is the angle from the line to the horizontal axis, measured in radians. + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to one and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to one. + + + + + Returns a new instance + with real and imaginary numbers positive infinite. + + + + + Returns a new instance + with real and imaginary numbers not a number. + + + + + Gets the real component of the complex number. + + The real component of the complex number. + + + + Gets the real imaginary component of the complex number. + + The real imaginary component of the complex number. + + + + Gets the phase or argument of this Complex32. + + + Phase always returns a value bigger than negative Pi and + smaller or equal to Pi. If this Complex32 is zero, the Complex32 + is assumed to be positive real with an argument of zero. + + The phase or argument of this Complex32 + + + + Gets the magnitude (or absolute value) of a complex number. + + Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN + The magnitude of the current instance. + + + + Gets the squared magnitude (or squared absolute value) of a complex number. + + The squared magnitude of the current instance. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex32. + + + + Gets a value indicating whether the Complex32 is zero. + + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. + + + true if this instance is ; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. 
+ + + true if this instance is infinite; otherwise, false. + + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + + true if this instance is real nonnegative number; otherwise, false. + + + + + Exponential of this Complex32 (exp(x), E^x). + + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex32 (Base E). + + The natural logarithm of this complex number. + + + + Common Logarithm of this Complex32 (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex32 with custom base. + + The logarithm of this complex number. + + + + Raise this Complex32 to the given value. + + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex32 to the inverse of the given value. + + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex32 + + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex32 + + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex32. + + + + + Evaluate all cubic roots of this Complex32. + + + + + Equality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real and imaginary components of the two complex numbers are equal; false otherwise. + + + + Inequality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real or imaginary components of the two complex numbers are not equal; false otherwise. + + + + Unary addition. + + The complex number to operate on. + Returns the same complex number. + + + + Unary minus. + + The complex number to operate on. + The negated value of the . + + + Addition operator. Adds two complex numbers together. + The result of the addition. + One of the complex numbers to add. + The other complex numbers to add. + + + Subtraction operator. Subtracts two complex numbers. + The result of the subtraction. + The complex number to subtract from. + The complex number to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The complex numbers to add. + The float value to add. + + + Subtraction operator. Subtracts float value from a complex value. + The result of the subtraction. + The complex number to subtract from. + The float value to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The float value to add. + The complex numbers to add. + + + Subtraction operator. Subtracts complex value from a float value. + The result of the subtraction. + The float vale to subtract from. + The complex value to subtract. + + + Multiplication operator. Multiplies two complex numbers. + The result of the multiplication. + One of the complex numbers to multiply. + The other complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The float value to multiply. + The complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The complex number to multiply. 
+ The float value to multiply. + + + Division operator. Divides a complex number by another. + Enhanced Smith's algorithm for dividing two complex numbers + + The result of the division. + The dividend. + The divisor. + + + + Helper method for dividing. + + Re first + Im first + Re second + Im second + + + + + Division operator. Divides a float value by a complex number. + Algorithm based on Smith's algorithm + + The result of the division. + The dividend. + The divisor. + + + Division operator. Divides a complex number by a float value. + The result of the division. + The dividend. + The divisor. + + + + Computes the conjugate of a complex number and returns the result. + + + + + Returns the multiplicative inverse of a complex number. + + + + + Converts the value of the current complex number to its equivalent string representation in Cartesian form. + + The string representation of the current instance in Cartesian form. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format for its real and imaginary parts. + + The string representation of the current instance in Cartesian form. + A standard or custom numeric format string. + + is not a valid format string. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified culture-specific formatting information. + + The string representation of the current instance in Cartesian form, as specified by . + An object that supplies culture-specific formatting information. + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. + The string representation of the current instance in Cartesian form, as specified by and . + A standard or custom numeric format string. + An object that supplies culture-specific formatting information. + + is not a valid format string. + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + The hash code for the complex number. + + + The hash code of the complex number. + + + The hash code is calculated as + System.Math.Exp(ComplexMath.Absolute(complexNumber)). + + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as float. 
+ + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Explicit conversion of a real decimal to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Explicit conversion of a Complex to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Implicit conversion of a real byte to a Complex32. + + The byte value to convert. + The result of the conversion. + + + + Implicit conversion of a real short to a Complex32. + + The short value to convert. + The result of the conversion. + + + + Implicit conversion of a signed byte to a Complex32. + + The signed byte value to convert. + The result of the conversion. + + + + Implicit conversion of a unsigned real short to a Complex32. + + The unsigned short value to convert. + The result of the conversion. + + + + Implicit conversion of a real int to a Complex32. + + The int value to convert. + The result of the conversion. + + + + Implicit conversion of a BigInteger int to a Complex32. + + The BigInteger value to convert. + The result of the conversion. + + + + Implicit conversion of a real long to a Complex32. + + The long value to convert. + The result of the conversion. + + + + Implicit conversion of a real uint to a Complex32. + + The uint value to convert. + The result of the conversion. + + + + Implicit conversion of a real ulong to a Complex32. + + The ulong value to convert. + The result of the conversion. + + + + Implicit conversion of a real float to a Complex32. + + The float value to convert. + The result of the conversion. + + + + Implicit conversion of a real double to a Complex32. + + The double value to convert. + The result of the conversion. + + + + Converts this Complex32 to a . + + A with the same values as this Complex32. + + + + Returns the additive inverse of a specified complex number. + + The result of the real and imaginary components of the value parameter multiplied by -1. + A complex number. + + + + Computes the conjugate of a complex number and returns the result. + + The conjugate of . + A complex number. + + + + Adds two complex numbers and returns the result. + + The sum of and . + The first complex number to add. + The second complex number to add. + + + + Subtracts one complex number from another and returns the result. + + The result of subtracting from . + The value to subtract from (the minuend). + The value to subtract (the subtrahend). + + + + Returns the product of two complex numbers. + + The product of the and parameters. + The first complex number to multiply. + The second complex number to multiply. 
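Complex32 is the single-precision counterpart of System.Numerics.Complex; the constructors, polar factory, operators and conversions documented above are sufficient for working with I/Q-style values. A sketch of basic usage (the type lives in the MathNet.Numerics namespace):

```csharp
using System;
using MathNet.Numerics;

class Complex32Sketch
{
    static void Main()
    {
        // Cartesian and polar construction.
        var a = new Complex32(1.0f, 2.0f);
        var b = Complex32.FromPolarCoordinates(1.0f, (float)Math.PI / 4.0f);

        // Operators are defined in the canonical way.
        Complex32 sum      = a + b;
        Complex32 product  = a * b;
        Complex32 quotient = a / b;   // division uses the enhanced Smith's algorithm, per the docs above

        // Magnitude, squared magnitude and phase (argument).
        float mag   = a.Magnitude;
        float mag2  = a.MagnitudeSquared;
        float phase = a.Phase;

        // Conjugate, and conversion to the double-precision Complex type.
        Complex32 conj = Complex32.Conjugate(a);
        System.Numerics.Complex asDouble = a.ToComplex();

        Console.WriteLine($"{sum} {product} {quotient} {mag} {mag2} {phase} {conj} {asDouble}");
    }
}
```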
+ + + + Divides one complex number by another and returns the result. + + The quotient of the division. + The complex number to be divided. + The complex number to divide by. + + + + Returns the multiplicative inverse of a complex number. + + The reciprocal of . + A complex number. + + + + Returns the square root of a specified complex number. + + The square root of . + A complex number. + + + + Gets the absolute value (or magnitude) of a complex number. + + The absolute value of . + A complex number. + + + + Returns e raised to the power specified by a complex number. + + The number e raised to the power . + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a complex number. + + The complex number raised to the power . + A complex number to be raised to a power. + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a single-precision floating-point number. + + The complex number raised to the power . + A complex number to be raised to a power. + A single-precision floating-point number that specifies a power. + + + + Returns the natural (base e) logarithm of a specified complex number. + + The natural (base e) logarithm of . + A complex number. + + + + Returns the logarithm of a specified complex number in a specified base. + + The logarithm of in base . + A complex number. + The base of the logarithm. + + + + Returns the base-10 logarithm of a specified complex number. + + The base-10 logarithm of . + A complex number. + + + + Returns the sine of the specified complex number. + + The sine of . + A complex number. + + + + Returns the cosine of the specified complex number. + + The cosine of . + A complex number. + + + + Returns the tangent of the specified complex number. + + The tangent of . + A complex number. + + + + Returns the angle that is the arc sine of the specified complex number. + + The angle which is the arc sine of . + A complex number. + + + + Returns the angle that is the arc cosine of the specified complex number. + + The angle, measured in radians, which is the arc cosine of . + A complex number that represents a cosine. + + + + Returns the angle that is the arc tangent of the specified complex number. + + The angle that is the arc tangent of . + A complex number. + + + + Returns the hyperbolic sine of the specified complex number. + + The hyperbolic sine of . + A complex number. + + + + Returns the hyperbolic cosine of the specified complex number. + + The hyperbolic cosine of . + A complex number. + + + + Returns the hyperbolic tangent of the specified complex number. + + The hyperbolic tangent of . + A complex number. + + + + Extension methods for the Complex type provided by System.Numerics + + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex. + + + + Gets the conjugate of the Complex number. + + The number to perform this operation on. + + The semantic of setting the conjugate is such that + + // a, b of type Complex32 + a.Conjugate = b; + + is equivalent to + + // a, b of type Complex32 + a = b.Conjugate + + + The conjugate of the number. 
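For the double-precision System.Numerics.Complex type, the library adds the extension methods documented in this block (squared magnitude, conjugate, reciprocal, powers, roots, ...). A sketch, with the extension names assumed from the 4.x ComplexExtensions class:

```csharp
using System;
using System.Numerics;
using MathNet.Numerics;   // brings the Complex extension methods into scope

class ComplexExtensionsSketch
{
    static void Main()
    {
        var z = new Complex(3.0, 4.0);

        double mag2  = z.MagnitudeSquared();  // 25, cheaper than Magnitude * Magnitude
        Complex conj = z.Conjugate();         // 3 - 4i
        Complex inv  = z.Reciprocal();        // multiplicative inverse, 1/z
        Complex root = z.SquareRoot();        // principal square root
        Complex sq   = z.Square();            // z * z

        Console.WriteLine($"{mag2} {conj} {inv} {root} {sq}");
    }
}
```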
+ + + + Returns the multiplicative inverse of a complex number. + + + + + Exponential of this Complex (exp(x), E^x). + + The number to perform this operation on. + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex (Base E). + + The number to perform this operation on. + + The natural logarithm of this complex number. + + + + + Common Logarithm of this Complex (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex with custom base. + + The logarithm of this complex number. + + + + Raise this Complex to the given value. + + The number to perform this operation on. + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex to the inverse of the given value. + + The number to perform this operation on. + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex + + The number to perform this operation on. + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex + + The number to perform this operation on. + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex. + + + + + Evaluate all cubic roots of this Complex. + + + + + Gets a value indicating whether the Complex32 is zero. + + The number to perform this operation on. + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + The number to perform this operation on. + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + The number to perform this operation on. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. + + The number to perform this operation on. + + true if this instance is NaN; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. + + The number to perform this operation on. + + true if this instance is infinite; otherwise, false. + + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + The number to perform this operation on. + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + The number to perform this operation on. + + true if this instance is real nonnegative number; otherwise, false. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + The string to parse. 
+ + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as double. + + + + + Converts the string representation of a complex number to a double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + A collection of frequently used mathematical constants. 
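The Constants class summarized here exposes the values listed below as plain double fields, so they can be used directly in expressions. A sketch of typical usage; the field names are assumptions based on the 4.x assembly and the descriptions that follow:

```csharp
using System;
using MathNet.Numerics;

class ConstantsSketch
{
    static void Main()
    {
        // Frequently used mathematical constants.
        double tau    = Constants.Pi2;      // 2*pi
        double halfPi = Constants.PiOver2;  // pi/2
        double sqrt2  = Constants.Sqrt2;

        // Conversion factors, e.g. degrees to radians: rad = deg * (pi/180).
        double rad = 45.0 * Constants.Degree;

        // Physical constants and SI prefix factors are included as well.
        double c    = Constants.SpeedOfLight; // [m/s]
        double nano = Constants.Nano;         // 1e-9

        Console.WriteLine($"{tau} {halfPi} {sqrt2} {rad} {c} {nano}");
    }
}
```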
+ + + + The number e + + + The number log[2](e) + + + The number log[10](e) + + + The number log[e](2) + + + The number log[e](10) + + + The number log[e](pi) + + + The number log[e](2*pi)/2 + + + The number 1/e + + + The number sqrt(e) + + + The number sqrt(2) + + + The number sqrt(3) + + + The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 + + + The number sqrt(3)/2 + + + The number pi + + + The number pi*2 + + + The number pi/2 + + + The number pi*3/2 + + + The number pi/4 + + + The number sqrt(pi) + + + The number sqrt(2pi) + + + The number sqrt(pi/2) + + + The number sqrt(2*pi*e) + + + The number log(sqrt(2*pi)) + + + The number log(sqrt(2*pi*e)) + + + The number log(2 * sqrt(e / pi)) + + + The number 1/pi + + + The number 2/pi + + + The number 1/sqrt(pi) + + + The number 1/sqrt(2pi) + + + The number 2/sqrt(pi) + + + The number 2 * sqrt(e / pi) + + + The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). + + + + + The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). + + + + + The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. amplitude, current, voltage). + + + The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. + + + The Catalan constant + Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } + + + The Euler-Mascheroni constant + lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } + + + The number (1+sqrt(5))/2, also known as the golden ratio + + + The Glaisher constant + e^(1/12 - Zeta(-1)) + + + The Khinchin constant + prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} + + + + The size of a double in bytes. + + + + + The size of an int in bytes. + + + + + The size of a float in bytes. + + + + + The size of a Complex in bytes. + + + + + The size of a Complex in bytes. 
+ + + + Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) + + + Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) + + + Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) + + + Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) + + + Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) + + + Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) + + + Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) + + + Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) + + + Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) + + + Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) + + + Planck time: t_p = l_p/c_0 [s] (2007 CODATA) + + + Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) + + + Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) + + + Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) + + + Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) + + + Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) + + + Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) + + + Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) + + + Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) + + + Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) + + + Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) + + + Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) + + + Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) + + + Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) + + + Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) + + + Electron Mass: [kg] (2007 CODATA) + + + Electron Mass Energy Equivalent: [J] (2007 CODATA) + + + Electron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Electron Compton Wavelength: [m] (2007 CODATA) + + + Classical Electron Radius: [m] (2007 CODATA) + + + Thomson Cross Section: [m^2] (2002 CODATA) + + + Electron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Electon G-Factor: [1] (2007 CODATA) + + + Muon Mass: [kg] (2007 CODATA) + + + Muon Mass Energy Equivalent: [J] (2007 CODATA) + + + Muon Molar Mass: [kg mol^-1] (2007 CODATA) + + + Muon Compton Wavelength: [m] (2007 CODATA) + + + Muon Magnetic Moment: [J T^-1] (2007 CODATA) + + + Muon G-Factor: [1] (2007 CODATA) + + + Tau Mass: [kg] (2007 CODATA) + + + Tau Mass Energy Equivalent: [J] (2007 CODATA) + + + Tau Molar Mass: [kg mol^-1] (2007 CODATA) + + + Tau Compton Wavelength: [m] (2007 CODATA) + + + Proton Mass: [kg] (2007 CODATA) + + + Proton Mass Energy Equivalent: [J] (2007 CODATA) + + + Proton Molar Mass: [kg mol^-1] (2007 CODATA) + + + Proton Compton Wavelength: [m] (2007 CODATA) + + + Proton Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton G-Factor: [1] (2007 CODATA) + + + Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Neutron Mass: [kg] (2007 CODATA) + + + Neutron Mass Energy Equivalent: [J] (2007 CODATA) + + + Neutron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Neuron Compton Wavelength: [m] (2007 CODATA) + + + Neutron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Neutron G-Factor: [1] 
(2007 CODATA) + + + Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Deuteron Mass: [kg] (2007 CODATA) + + + Deuteron Mass Energy Equivalent: [J] (2007 CODATA) + + + Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Helion Mass: [kg] (2007 CODATA) + + + Helion Mass Energy Equivalent: [J] (2007 CODATA) + + + Helion Molar Mass: [kg mol^-1] (2007 CODATA) + + + Avogadro constant: [mol^-1] (2010 CODATA) + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 + + + The SI prefix factor corresponding to 1 000 + + + The SI prefix factor corresponding to 100 + + + The SI prefix factor corresponding to 10 + + + The SI prefix factor corresponding to 0.1 + + + The SI prefix factor corresponding to 0.01 + + + The SI prefix factor corresponding to 0.001 + + + The SI prefix factor corresponding to 0.000 001 + + + The SI prefix factor corresponding to 0.000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 + + + + Sets parameters for the library. + + + + + Use a specific provider if configured, e.g. using + environment variables, or fall back to the best providers. + + + + + Use the best provider available. + + + + + Gets or sets a value indicating whether the distribution classes check validate each parameter. + For the multivariate distributions this could involve an expensive matrix factorization. + The default setting of this property is true. + + + + + Gets or sets a value indicating whether to use thread safe random number generators (RNG). + Thread safe RNG about two and half time slower than non-thread safe RNG. + + + true to use thread safe random number generators ; otherwise, false. + + + + + Optional path to try to load native provider binaries from. + + + + + Gets or sets a value indicating how many parallel worker threads shall be used + when parallelization is applicable. + + Default to the number of processor cores, must be between 1 and 1024 (inclusive). + + + + Gets or sets the TaskScheduler used to schedule the worker tasks. + + + + + Gets or sets the order of the matrix when linear algebra provider + must calculate multiply in parallel threads. + + The order. Default 64, must be at least 3. + + + + Gets or sets the number of elements a vector or matrix + must contain before we multiply threads. + + Number of elements. Default 300, must be at least 3. + + + + Numerical Derivative. + + + + + Initialized a NumericalDerivative with the given points and center. + + + + + Initialized a NumericalDerivative with the default points and center for the given order. + + + + + Evaluates the derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + Derivative order. + + + + Creates a function handle for the derivative of a scalar univariate function. 
+ + Univariate function handle. + Derivative order. + + + + Evaluates the first derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the first derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the second derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the second derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + Derivative order. + + + + Evaluates the first partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + + + + Evaluates the partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + Derivative order. + + + + Evaluates the first partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + + + + Class to calculate finite difference coefficients using Taylor series expansion method. + + + For n points, coefficients are calculated up to the maximum derivative order possible (n-1). + The current function value position specifies the "center" for surrounding coefficients. + Selecting the first, middle or last positions represent forward, backwards and central difference methods. + + + + + + + Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. + + + + + Initializes a new instance of the class. + + Number of finite difference coefficients. + + + + Gets the finite difference coefficients for a specified center and order. + + Current function position with respect to coefficients. Must be within point range. + Order of finite difference coefficients. + Vector of finite difference coefficients. + + + + Gets the finite difference coefficients for all orders at a specified center. + + Current function position with respect to coefficients. Must be within point range. + Rectangular array of coefficients, with columns specifying order. + + + + Type of finite different step size. 
+ + + + + The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. + + + + + A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however + this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the + function input parameter and not the order of the finite difference derivative. + + + + + A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order + and function input parameter. The initial scaling according to finite difference coefficient order can be thought of as producing a + base step size, h, that is equivalent to the relative scaling described above. This step size is then scaled according to the function + input parameter. Although implementation may vary, an example of second order accurate scaling may be (eps)^(1/3)*(1+abs(x)). + + + + + Class to evaluate the numerical derivative of a function using finite difference approximations. + The number of points and the center position can be chosen at initialization. + This class can also be used to return function handles (delegates) for a fixed derivative order and variable. + It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions respectively. + + + + + Initializes a NumericalDerivative class with the default 3-point central difference method. + + + + + Initializes a NumericalDerivative class. + + Number of points for finite difference derivatives. + Location of the center with respect to other points. Value ranges from zero to points-1. + + + + Sets and gets the finite difference step size. This value is used for each function evaluation if relative step size types are used. + If the base step size used in scaling is desired, see the base step size parameter below. + + + Setting then getting the StepSize may return a different value. This is not unusual since a user-defined step size is converted to a + base-2 representable number to improve finite difference accuracy. + + + + + Sets and gets the base finite difference step size. The value assigned to this parameter is only used if the step type is set to RelativeX. + However, if the StepType is Relative, it will contain the base step size computed from the epsilon parameter based on the finite difference order. + + + + + Sets and gets the base finite difference step size. This parameter is only used if the step type is set to Relative. + By default this is set to machine epsilon, from which the base step size is computed. + + + + + Sets and gets the location of the center point for the finite difference derivative. + + + + + Number of times a function is evaluated for numerical derivatives. + + + + + Type of step size for computing finite differences. If set to absolute, dx = h. + If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when + h is approximately equal to the square-root of machine accuracy, epsilon. + + + + + Evaluates the derivative of equidistant points using the finite difference method. + + Vector of points StepSize apart. + Derivative order. + Finite difference step size. + Derivative of points of the specified order. + + + + Evaluates the derivative of a scalar univariate function. + + + Supplying the optional argument currentValue will reduce the number of function evaluations + required to calculate the finite difference derivative. + + Function handle. + Point at which to compute the derivative. + Derivative order. + Current function value at center.
+ Function derivative at x of the specified order. + + + + Creates a function handle for the derivative of a scalar univariate function. + + Input function handle. + Derivative order. + Function handle that evaluates the derivative of input function at a fixed order. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Function partial derivative at x of the specified order. + + + + Evaluates the partial derivatives of a multivariate function array. + + + This function assumes the input vector x is of the correct length for f. + + Multivariate vector function array handle. + Vector at which to evaluate the derivatives. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Vector of functions partial derivatives at x of the specified order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at a fixed order. + + + + Creates a function handle for the partial derivative of a vector multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at fixed order. + + + + Evaluates the mixed partial derivative of variable order for multivariate functions. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function handle. + Points at which to evaluate the derivative. + Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivative at x of the specified order. + + + + Evaluates the mixed partial derivative of variable order for multivariate function arrays. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function array handle. + Vector at which to evaluate the derivative. + Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivatives at x of the specified order. + + + + Creates a function handle for the mixed partial derivative of a multivariate function. + + Input function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Creates a function handle for the mixed partial derivative of a multivariate vector function. + + Input vector function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Resets the evaluation counter. 
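For orientation, here is a minimal C# sketch of the NumericalDerivative class documented above. The exact method and facade names (EvaluateDerivative, Differentiate) are the usual Math.NET Numerics ones and are assumptions here rather than quotations from this documentation.

```csharp
using System;
using MathNet.Numerics;
using MathNet.Numerics.Differentiation;

class DerivativeSketch
{
    static void Main()
    {
        // f(x) = x^3, so f'(x) = 3x^2 and f''(x) = 6x.
        Func<double, double> f = x => x * x * x;

        // Default scheme: 3-point central difference.
        var nd = new NumericalDerivative();
        double d1 = nd.EvaluateDerivative(f, 2.0, 1);   // ~12
        double d2 = nd.EvaluateDerivative(f, 2.0, 2);   // ~12

        // 5-point stencil, centered on the middle point (center index 2).
        var nd5 = new NumericalDerivative(5, 2);
        double d1b = nd5.EvaluateDerivative(f, 2.0, 1);

        // Static convenience facade (assumed: MathNet.Numerics.Differentiate).
        double d1c = Differentiate.FirstDerivative(f, 2.0);

        Console.WriteLine($"{d1} {d2} {d1b} {d1c}");
    }
}
```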
+ + + + + Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Hessian object with a three point central difference method. + + + + + Creates a numerical Hessian with a specified differentiation scheme. + + Number of points for Hessian evaluation. + Center point for differentiation. + + + + Evaluates the Hessian of the scalar univariate function f at point x. + + Scalar univariate function handle. + Point at which to evaluate Hessian. + Hessian tensor. + + + + Evaluates the Hessian of a multivariate function f at points x. + + + This method of computing the Hessian is only valid for Lipschitz continuous functions. + The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. + + Multivariate function handle. + Points at which to evaluate Hessian. + Hessian tensor. + + + + Resets the function evaluation counter for the Hessian. + + + + + Class for evaluating the Jacobian of a function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Jacobian object with a three point central difference method. + + + + + Creates a numerical Jacobian with a specified differentiation scheme. + + Number of points for Jacobian evaluation. + Center point for differentiation. + + + + Evaluates the Jacobian of a scalar univariate function f at point x. + + Scalar univariate function handle. + Point at which to evaluate Jacobian. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function f at vector x. + + + This function assumes that the length of vector x is consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function f at vector x given a current function value. + + + To minimize the number of function evaluations, a user can supply the current value of the function + to be used in computing the Jacobian. This value must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x is consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Current function value at finite difference center. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function array f at vector x. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Jacobian matrix. + + + + Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. + + + To minimize the number of function evaluations, a user can supply a vector of current values of the functions + to be used in computing the Jacobian. These values must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x is consistent with the argument count of f. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Vector of current function values. + Jacobian matrix. + + + + Resets the function evaluation counter for the Jacobian.
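A companion sketch for the Hessian and Jacobian helpers documented above; the Evaluate signatures (vector in, vector or matrix out) follow the descriptions here but are otherwise assumed.

```csharp
using System;
using MathNet.Numerics.Differentiation;

class JacobianHessianSketch
{
    static void Main()
    {
        // g(x, y) = x^2 * y + sin(y)
        Func<double[], double> g = v => v[0] * v[0] * v[1] + Math.Sin(v[1]);
        double[] x = { 1.0, 2.0 };

        var jac = new NumericalJacobian();        // central 3-point method by default
        double[] grad = jac.Evaluate(g, x);       // { 2xy, x^2 + cos(y) } at (1, 2)

        var hess = new NumericalHessian();
        double[,] h = hess.Evaluate(g, x);        // symmetric 2x2 matrix of second partials

        Console.WriteLine($"dg/dx = {grad[0]}, dg/dy = {grad[1]}, d2g/dx2 = {h[0, 0]}");
        Console.WriteLine($"function evaluations: {jac.FunctionEvaluations}");
    }
}
```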
+ + + + + Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Double-Exponential integration. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The number of Gauss-Legendre points. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Gauss-Kronrod integration. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the differintegral of order n at x. + + + + Metrics to measure the distance between two structures. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. 
+ + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Pearson's distance, i.e. 1 - the person correlation coefficient. + + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Discrete Univariate Bernoulli distribution. + The Bernoulli distribution is a distribution over bits. The parameter + p specifies the probability that a 1 is generated. + Wikipedia - Bernoulli distribution. + + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + If the Bernoulli parameter is not in the range [0,1]. + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + If the Bernoulli parameter is not in the range [0,1]. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. 
+ the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Generates one sample from the Bernoulli distribution. + + The random source to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A random sample from the Bernoulli distribution. + + + + Samples a Bernoulli distributed random variable. + + A sample from the Bernoulli distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. + + + + Samples a sequence of Bernoulli distributed random variables. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. + + + + Samples a sequence of Bernoulli distributed random variables. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Continuous Univariate Beta distribution. + For details about this distribution, see + Wikipedia - Beta distribution. + + + There are a few special cases for the parameterization of the Beta distribution. When both + shape parameters are positive infinity, the Beta distribution degenerates to a point distribution + at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point + distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution + degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the + distribution degenerates to a point distribution at the non-zero shape parameter. + + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + A string representation of the Beta distribution. 
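As a usage sketch for the discrete Bernoulli distribution above (C#, assuming the MathNet.Numerics.Distributions namespace and the usual instance/static method names, which the stripped documentation does not spell out verbatim):

```csharp
using System;
using MathNet.Numerics.Distributions;

class BernoulliSketch
{
    static void Main()
    {
        var coin = new Bernoulli(0.3);                        // P(X = 1) = 0.3
        Console.WriteLine(coin.Mean);                         // 0.3
        Console.WriteLine(coin.Probability(1));               // PMF at k = 1 -> 0.3
        Console.WriteLine(coin.CumulativeDistribution(0.5));  // P(X <= 0.5) = 0.7

        int one = coin.Sample();                              // a single draw, 0 or 1
        var draws = new int[1000];
        coin.Samples(draws);                                  // fill an array with draws

        // Static forms, without constructing an instance.
        double pmf = Bernoulli.PMF(0.3, 1);
        int s = Bernoulli.Sample(0.3);
        Console.WriteLine($"{one} {pmf} {s}");
    }
}
```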
+ + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. + + + + + Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Beta distribution. + + + + + Gets the variance of the Beta distribution. + + + + + Gets the standard deviation of the Beta distribution. + + + + + Gets the entropy of the Beta distribution. + + + + + Gets the skewness of the Beta distribution. + + + + + Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. + + + + + Gets the median of the Beta distribution. + + + + + Gets the minimum of the Beta distribution. + + + + + Gets the maximum of the Beta distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Beta distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Beta distribution. + + a sequence of samples from the distribution. + + + + Samples Beta distributed random variables by sampling two Gamma variables and normalizing. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a random number from the Beta distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the cumulative distribution at location . 
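The corresponding static helpers for the Beta distribution (PDF, CDF, InvCDF and Sample, as described above) can be used without constructing an instance; the parameter order (α, β, x) is assumed here.

```csharp
using System;
using MathNet.Numerics.Distributions;

class BetaSketch
{
    static void Main()
    {
        const double a = 2.0, b = 5.0;

        double pdf = Beta.PDF(a, b, 0.3);      // density at x = 0.3
        double cdf = Beta.CDF(a, b, 0.3);      // P(X <= 0.3)
        double q90 = Beta.InvCDF(a, b, 0.9);   // 90% quantile (flagged above as slow/iterative)

        var beta = new Beta(a, b);
        Console.WriteLine($"mean = {beta.Mean}, mode = {beta.Mode}, draw = {beta.Sample()}");
        Console.WriteLine($"{pdf} {cdf} {q90}");
    }
}
```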
+ + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Beta-Binomial distribution. + The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising + when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. + The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. + It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. + Wikipedia - Beta-Binomial distribution. + + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. 
+ + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a random number from the BetaBinomial distribution. + + + + Samples a BetaBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of BetaBinomial distributed random variables. 
+ + a sequence of samples from the distribution. + + + + Samples a BetaBinomial distributed random variable. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Samples an array of BetaBinomial distributed random variables. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. + + + + Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast + is used to construct an underlying beta distribution. + + The minimum value. + The maximum value. + The most likely value (mode). + The random number generator which is used to draw random samples. + The Beta distribution derived from the PERT parameters. + + + + A string representation of the distribution. + + A string representation of the BetaScaled distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. + + + + + Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. + + + + + Gets the location (μ) of the BetaScaled distribution. + + + + + Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the BetaScaled distribution. + + + + + Gets the variance of the BetaScaled distribution. + + + + + Gets the standard deviation of the BetaScaled distribution. + + + + + Gets the entropy of the BetaScaled distribution. + + + + + Gets the skewness of the BetaScaled distribution. + + + + + Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. 
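A sketch of the scaled Beta and the PERT construction described above; the constructor order (α, β, location, scale) follows the parameter descriptions here, and the factory name PERT is an assumption based on current Math.NET Numerics releases, not a quotation from this documentation.

```csharp
using System;
using MathNet.Numerics.Distributions;

class BetaScaledSketch
{
    static void Main()
    {
        // Beta(2,5) shape stretched onto the interval [10, 10 + 20].
        var scaled = new BetaScaled(2.0, 5.0, 10.0, 20.0);
        Console.WriteLine($"mean = {scaled.Mean}, pdf(15) = {scaled.Density(15.0)}");

        // Expert estimate: minimum 8, maximum 30, most likely 12.
        // Factory name PERT is assumed, not confirmed by the text above.
        var pert = BetaScaled.PERT(8.0, 30.0, 12.0);
        Console.WriteLine($"PERT mean = {pert.Mean}, draw = {pert.Sample()}");
    }
}
```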
+ + + + + Gets the median of the BetaScaled distribution. + + + + + Gets the minimum of the BetaScaled distribution. + + + + + Gets the maximum of the BetaScaled distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. 
Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Binomial distribution. + For details about this distribution, see + Wikipedia - Binomial distribution. + + + The distribution is parameterized by a probability (between 0.0 and 1.0). + + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + If is not in the interval [0.0,1.0]. + If is negative. + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The random number generator which is used to draw random samples. + If is not in the interval [0.0,1.0]. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + + + + Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. + + + + + Gets the number of trials. Range: n ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. 
+ + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the Binomial distribution without doing parameter checking. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successful trials. + + + + Samples a Binomially distributed random variable. + + The number of successes in N trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Binomially distributed random variables. + + a sequence of successes in N trials. + + + + Samples a binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Samples a binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. 
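For the Binomial distribution, the instance and static calls documented above might be used roughly as follows (constructor order (p, n) and static parameter order (p, n, k) assumed):

```csharp
using System;
using MathNet.Numerics.Distributions;

class BinomialSketch
{
    static void Main()
    {
        // 100 trials, 25% success probability per trial.
        var bin = new Binomial(0.25, 100);
        Console.WriteLine($"mean = {bin.Mean}, stddev = {bin.StdDev}");
        Console.WriteLine($"P(X = 25)  = {bin.Probability(25)}");
        Console.WriteLine($"P(X <= 30) = {bin.CumulativeDistribution(30)}");

        int successes = bin.Sample();              // one draw: successes out of 100

        // Static forms, as described above.
        double pmf = Binomial.PMF(0.25, 100, 25);
        double cdf = Binomial.CDF(0.25, 100, 30);
        var draws = new int[10];
        Binomial.Samples(draws, 0.25, 100);        // fill an array with draws
        Console.WriteLine($"{successes} {pmf} {cdf}");
    }
}
```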
+ + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Gets the scale (a) of the distribution. Range: a > 0. + + + + + Gets the first shape parameter (c) of the distribution. Range: c > 0. + + + + + Gets the second shape parameter (k) of the distribution. Range: k > 0. + + + + + Initializes a new instance of the Burr Type XII class. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Burr distribution. + + + + + Gets the variance of the Burr distribution. + + + + + Gets the standard deviation of the Burr distribution. + + + + + Gets the mode of the Burr distribution. + + + + + Gets the minimum of the Burr distribution. + + + + + Gets the maximum of the Burr distribution. + + + + + Gets the entropy of the Burr distribution (currently not supported). + + + + + Gets the skewness of the Burr distribution. + + + + + Gets the median of the Burr distribution. + + + + + Generates a sample from the Burr distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the Burr distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . 
+ + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Discrete Univariate Categorical distribution. + For details about this distribution, see + Wikipedia - Categorical distribution. This + distribution is sometimes called the Discrete distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. + + + Support: 0..k where k = length(probability mass array)-1 + + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class from a . The distribution + will not be automatically updated when the histogram changes. The categorical distribution will have + one value for each bucket and a probability for that value proportional to the bucket count. + + The histogram from which to create the categorical variable. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Checks whether the parameters of the distribution are valid. + + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Checks whether the parameters of the distribution are valid. 
+ + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Gets the probability mass vector (non-negative ratios) of the multinomial. + + Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a . + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets he mode of the distribution. + + Throws a . + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. 
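A brief sketch of the Categorical (discrete) distribution described above, using an unnormalized ratio vector as documented; the method names are the usual Math.NET Numerics ones and are assumed here.

```csharp
using System;
using MathNet.Numerics.Distributions;

class CategoricalSketch
{
    static void Main()
    {
        // Unnormalized ratios for outcomes 0..3; they do not need to sum to 1.
        double[] weights = { 1.0, 2.0, 3.0, 4.0 };
        var cat = new Categorical(weights);

        Console.WriteLine(cat.Probability(3));            // 0.4 after normalization
        Console.WriteLine(cat.CumulativeDistribution(1)); // P(X <= 1) = 0.3

        // Draw outcomes and tally them.
        var counts = new int[4];
        for (int i = 0; i < 10000; i++) counts[cat.Sample()]++;
        Console.WriteLine(string.Join(", ", counts));
    }
}
```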
+ + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the cumulative distribution function. This method performs no parameter checking. + If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + An array representing the unnormalized cumulative distribution function. + + + + Returns one trials from the categorical distribution. + + The random number generator to use. + The (unnormalized) cumulative distribution of the probability distribution. + One sample from the categorical distribution implied by . + + + + Samples a Binomially distributed random variable. + + The number of successful trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of successful trial counts. + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. 
+ An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Continuous Univariate Cauchy distribution. + The Cauchy distribution is a symmetric continuous probability distribution. For details about this distribution, see + Wikipedia - Cauchy distribution. + + + + + Initializes a new instance of the class with the location parameter set to 0 and the scale parameter set to 1 + + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Gets the location (x0) of the distribution. + + + + + Gets the scale (γ) of the distribution. Range: γ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. 
+ + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi distribution. + This distribution is a continuous probability distribution. The distribution usually arises when a k-dimensional vector's orthogonal + components are independent and each follow a standard normal distribution. The length of the vector will + then have a chi distribution. + Wikipedia - Chi distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. 
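Before the Chi entries continue, here is a short sketch of the Cauchy members documented above: the instance Density, CumulativeDistribution and inverse CDF, plus the static PDF/CDF/InvCDF/Sample helpers. As before, the Math.NET Numerics namespace and the static parameter order (location, scale, then x or p) are assumptions.

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions.Cauchy (see note above).
using System;
using MathNet.Numerics.Distributions;

var cauchy = new Cauchy(0.0, 1.0);                       // location x0 = 0, scale gamma = 1

Console.WriteLine(cauchy.Density(0.0));                  // PDF at x = 0: 1/pi ~ 0.3183
Console.WriteLine(cauchy.CumulativeDistribution(1.0));   // CDF at x = 1: 0.75
Console.WriteLine(cauchy.Median);                        // equals the location parameter

// Static helpers mirror the instance members; parameter order assumed (location, scale, x/p).
Console.WriteLine(Cauchy.InvCDF(0.0, 1.0, 0.75));        // ~ 1.0
Console.WriteLine(Cauchy.Sample(0.0, 1.0));              // one heavy-tailed draw
```

Since the Cauchy distribution has no finite mean or variance, the median and scale are the robust summaries to rely on here.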
+ + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Chi distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Chi distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. 
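A corresponding sketch for the Chi distribution members listed above, under the same Math.NET Numerics assumption:

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions.Chi (see note above).
using System;
using MathNet.Numerics.Distributions;

var chi = new Chi(3.0);                             // k = 3 degrees of freedom
chi.RandomSource = new Random(1);

Console.WriteLine(chi.Mean);                        // sqrt(2) * Gamma(2) / Gamma(1.5) ~ 1.596
Console.WriteLine(chi.Density(1.0));                // PDF at x = 1
Console.WriteLine(chi.CumulativeDistribution(2.0)); // P(X <= 2)
Console.WriteLine(chi.Sample());                    // length of a 3-dimensional standard normal vector
```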
+ + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi-Squared distribution. + This distribution is a sum of the squares of k independent standard normal random variables. + Wikipedia - ChiSquare distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi-Squared distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ChiSquare distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ChiSquare distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. 
+ The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + Generates a sample from the ChiSquare distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sample from the ChiSquare distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Continuous Univariate Uniform distribution. + The continuous uniform distribution is a distribution over real numbers. For details about this distribution, see + Wikipedia - Continuous uniform distribution. + + + + + Initializes a new instance of the ContinuousUniform class with lower bound 0 and upper bound 1. + + + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + If the upper bound is smaller than the lower bound. + + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + If the upper bound is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. 
+ the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ContinuousUniform distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the inverse cumulative density at . + + + + + Generates a sample from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. + + + + Generates a sample from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. 
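The continuous uniform entries above follow the same instance/static split. A sketch, with the same assumptions as before:

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions.ContinuousUniform (see note above).
using System;
using MathNet.Numerics.Distributions;

var u = new ContinuousUniform(-1.0, 1.0);                 // lower <= upper
u.RandomSource = new Random(123);

Console.WriteLine(u.Mean);                                // (lower + upper) / 2 = 0
Console.WriteLine(u.Variance);                            // (upper - lower)^2 / 12 = 1/3
Console.WriteLine(u.CumulativeDistribution(0.5));         // (0.5 - (-1)) / 2 = 0.75
Console.WriteLine(u.Sample());                            // one draw in [-1, 1]

// Static form; parameter order assumed (lower, upper, x/p).
Console.WriteLine(ContinuousUniform.InvCDF(-1.0, 1.0, 0.75)); // 0.5
```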
+ + + + Discrete Univariate Conway-Maxwell-Poisson distribution. + The Conway-Maxwell-Poisson distribution is a generalization of the Poisson, Geometric and Bernoulli + distributions. It is parameterized by two real numbers "lambda" and "nu". For + + nu = 0 the distribution reverts to a Geometric distribution + nu = 1 the distribution reverts to the Poisson distribution + nu -> infinity the distribution converges to a Bernoulli distribution + + This implementation will cache the value of the normalization constant. + Wikipedia - ConwayMaxwellPoisson distribution. + + + + + The mean of the distribution. + + + + + The variance of the distribution. + + + + + Caches the value of the normalization constant. + + + + + Since many properties of the distribution can only be computed approximately, the tolerance + level specifies how much error we accept. + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Gets the lambda (λ) parameter. Range: λ > 0. + + + + + Gets the rate of decay (ν) parameter. Range: ν ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
+ + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the cumulative distribution at location . + + + + + Gets the normalization constant of the Conway-Maxwell-Poisson distribution. + + + + + Computes an approximate normalization constant for the CMP distribution. + + The lambda (λ) parameter for the CMP distribution. + The rate of decay (ν) parameter for the CMP distribution. + + an approximate normalization constant for the CMP distribution. + + + + + Returns one trials from the distribution. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The z parameter. + + One sample from the distribution implied by , , and . + + + + + Samples a Conway-Maxwell-Poisson distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples a sequence of a Conway-Maxwell-Poisson distributed random variables. + + + a sequence of samples from a Conway-Maxwell-Poisson distribution. + + + + + Samples a random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Multivariate Dirichlet distribution. For details about this distribution, see + Wikipedia - Dirichlet distribution. + + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + No parameter can be less than zero and at least one parameter should be larger than zero. + + The parameters of the Dirichlet distribution. 
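The Conway-Maxwell-Poisson entries above note that ν = 1 reduces the distribution to a Poisson and that the normalization constant is only computed to a tolerance. A quick way to exercise that claim, assuming the Math.NET Numerics classes (the Poisson class is assumed to live in the same namespace; it is not documented in this excerpt):

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions (see note above).
using System;
using MathNet.Numerics.Distributions;

var cmp = new ConwayMaxwellPoisson(2.0, 1.0);   // lambda = 2, nu = 1: should reduce to Poisson(2)
var poisson = new Poisson(2.0);

for (int k = 0; k <= 4; k++)
{
    // Agreement is only up to the approximation tolerance of the cached
    // normalization constant described above.
    Console.WriteLine($"k={k}  cmp={cmp.Probability(k):F6}  poisson={poisson.Probability(k):F6}");
}
```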
+ + + + Gets or sets the parameters of the Dirichlet distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the dimension of the Dirichlet distribution. + + + + + Gets the sum of the Dirichlet parameters. + + + + + Gets the mean of the Dirichlet distribution. + + + + + Gets the variance of the Dirichlet distribution. + + + + + Gets the entropy of the distribution. + + + + + Computes the density of the distribution. + + The locations at which to compute the density. + the density at . + The Dirichlet distribution requires that the sum of the components of x equals 1. + You can also leave out the last component, and it will be computed from the others. + + + + Computes the log density of the distribution. + + The locations at which to compute the density. + the density at . + + + + Samples a Dirichlet distributed random vector. + + A sample from this distribution. + + + + Samples a Dirichlet distributed random vector. + + The random number generator to use. + The Dirichlet distribution parameter. + a sample from the distribution. + + + + Discrete Univariate Uniform distribution. + The discrete uniform distribution is a distribution over integers. The distribution + is parameterized by a lower and upper bound (both inclusive). + Wikipedia - Discrete uniform distribution. + + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Gets the inclusive lower bound of the probability distribution. + + + + + Gets the inclusive upper bound of the probability distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution; since every element in the domain has the same probability this method returns the middle one. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . 
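The Dirichlet entries above point out that Density expects components summing to 1 (or the last component omitted), and the discrete uniform entries describe an inclusive integer range. A combined sketch, under the same Math.NET Numerics assumption:

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions (see note above).
using System;
using MathNet.Numerics.Distributions;

// Dirichlet over the 3-simplex: the density argument must sum to 1.
var dir = new Dirichlet(new[] { 2.0, 2.0, 2.0 });
Console.WriteLine(dir.Density(new[] { 0.2, 0.3, 0.5 }));   // 120 * 0.2 * 0.3 * 0.5 = 3.6

double[] w = dir.Sample();                                  // one random point on the simplex
Console.WriteLine(string.Join(", ", w));

// Discrete uniform on the inclusive range [1, 6]: a fair die.
var die = new DiscreteUniform(1, 6);
die.RandomSource = new Random(7);
Console.WriteLine(die.Probability(3));                      // 1/6
Console.WriteLine(die.CumulativeDistribution(4.0));         // 4/6
Console.WriteLine(die.Sample());                            // an integer between 1 and 6
```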
+ + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Generates one sample from the discrete uniform distribution. This method does not do any parameter checking. + + The random source to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A random sample from the discrete uniform distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of uniformly distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a uniformly distributed random variable. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Samples a uniformly distributed random variable. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Continuous Univariate Erlang distribution. + This distribution is a continuous probability distribution with wide applicability primarily due to its + relation to the exponential and Gamma distributions. + Wikipedia - Erlang distribution. + + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. 
+ The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Erlang distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The scale (μ) of the Erlang distribution. Range: μ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Erlang distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Gets the shape (k) of the Erlang distribution. Range: k ≥ 0. + + + + + Gets the rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + + Gets the scale of the Erlang distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum value. + + + + + Gets the Maximum value. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Erlang distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Erlang distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. 
+ The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Exponential distribution. + The exponential distribution is a distribution over the real numbers parameterized by one non-negative parameter. + Wikipedia - exponential distribution. + + + + + Initializes a new instance of the class. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Gets the rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . 
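The Erlang entries above offer both rate and scale parameterizations, and the exponential distribution that follows is its single-stage special case (an Erlang with k = 1). A short sketch, same assumptions as before:

```csharp
// Minimal sketch, assuming MathNet.Numerics.Distributions (see note above).
using System;
using MathNet.Numerics.Distributions;

// Exponential with rate lambda = 0.5, i.e. mean waiting time 2.0.
var exp = new Exponential(0.5);
Console.WriteLine(exp.Mean);                            // 1 / rate = 2.0
Console.WriteLine(exp.Density(0.0));                    // rate * e^0 = 0.5

// Erlang(k = 3, rate = 0.5): the sum of three such exponential waiting times.
var erlang = new Erlang(3, 0.5);
Console.WriteLine(erlang.Mean);                         // k / rate = 6.0
Console.WriteLine(erlang.CumulativeDistribution(6.0));  // P(total wait <= 6)
```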
+ + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Exponential distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Draws a random sample from the distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate F-distribution, also known as Fisher-Snedecor distribution. + For details about this distribution, see + Wikipedia - FisherSnedecor distribution. + + + + + Initializes a new instance of the class. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Initializes a new instance of the class. 
+ + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Gets the first degree of freedom (d1) of the distribution. Range: d1 > 0. + + + + + Gets the second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the FisherSnedecor distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the FisherSnedecor distribution. + + a sequence of samples from the distribution. + + + + Generates one sample from the FisherSnedecor distribution without parameter checking. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a FisherSnedecor distributed random number. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
+ + The location at which to compute the cumulative distribution function. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Gamma distribution. + For details about this distribution, see + Wikipedia - Gamma distribution. + + + The Gamma distribution is parametrized by a shape and inverse scale parameter. When we want + to specify a Gamma distribution which is a point distribution we set the shape parameter to be the + location of the point distribution and the inverse scale as positive infinity. The distribution + with shape and inverse scale both zero is undefined. + + Random number generation for the Gamma distribution is based on the algorithm in: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. 
+ The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Gamma distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Gamma distribution. Range: k ≥ 0. + The scale (θ) of the Gamma distribution. Range: θ ≥ 0 + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Gamma distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Gets or sets the shape (k, α) of the Gamma distribution. Range: α ≥ 0. + + + + + Gets or sets the rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + + Gets or sets the scale (θ) of the Gamma distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Gamma distribution. + + + + + Gets the variance of the Gamma distribution. + + + + + Gets the standard deviation of the Gamma distribution. + + + + + Gets the entropy of the Gamma distribution. + + + + + Gets the skewness of the Gamma distribution. + + + + + Gets the mode of the Gamma distribution. + + + + + Gets the median of the Gamma distribution. + + + + + Gets the minimum of the Gamma distribution. + + + + + Gets the maximum of the Gamma distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Gamma distribution. + + a sequence of samples from the distribution. + + + + Sampling implementation based on: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + This method performs no parameter checks. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. 
+ The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + A sample from a Gamma distributed random variable. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Geometric distribution. + The Geometric distribution is a distribution over positive integers parameterized by one positive real number. + This implementation of the Geometric distribution will never generate 0's. + Wikipedia - geometric distribution. + + + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. 
+ + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a not supported exception. + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Returns one sample from the distribution. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + One sample from the distribution implied by . + + + + Samples a Geometric distributed random variable. + + A sample from the Geometric distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Geometric distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. 
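The Geometric entries above note that this implementation never generates 0, i.e. its support starts at 1. A minimal sketch of the instance and static members documented above follows; the success probability p = 0.3, the seed, and the sample count are purely illustrative.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class GeometricExample
{
    static void Main()
    {
        // p = 0.3 is an arbitrary success probability; samples are >= 1, never 0.
        var geo = new Geometric(0.3);
        Console.WriteLine($"mean = {geo.Mean}");                          // 1/p for this parameterization
        Console.WriteLine($"P(X = 2)  = {geo.Probability(2)}");           // PMF at k = 2
        Console.WriteLine($"P(X <= 4) = {geo.CumulativeDistribution(4.0)}");

        // Static helpers with an explicit random number generator, as documented above.
        var rng = new Random(1);
        int one = Geometric.Sample(rng, 0.3);
        double avg = Geometric.Samples(rng, 0.3).Take(10000).Average();
        Console.WriteLine($"one sample = {one}, empirical mean ≈ {avg}");
    }
}
```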
+ + + + Samples a random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Discrete Univariate Hypergeometric distribution. + This distribution is a discrete probability distribution that describes the number of successes in a sequence + of n draws from a finite population without replacement, just as the binomial distribution + describes the number of successes for draws with replacement + Wikipedia - Hypergeometric distribution. + + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the size of the population (N). + + + + + Gets the number of draws without replacement (n). + + + + + Gets the number successes within the population (K, M). + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the log probability mass at location . 
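To make the Hypergeometric members above concrete, here is a small sketch using the constructor and instance members (the static overloads are skipped because their argument order is not restated here). The population/success/draw counts N = 50, K = 10, n = 5 are invented for illustration only.

```csharp
using System;
using MathNet.Numerics.Distributions;

class HypergeometricExample
{
    static void Main()
    {
        // Illustrative values: population N = 50, successes K = 10, draws n = 5 (without replacement).
        var hyper = new Hypergeometric(50, 10, 5);

        Console.WriteLine($"mean = {hyper.Mean}");                         // n*K/N = 1
        Console.WriteLine($"P(X = 2)  = {hyper.Probability(2)}");          // PMF at k = 2
        Console.WriteLine($"P(X <= 2) = {hyper.CumulativeDistribution(2.0)}");
        Console.WriteLine($"sample    = {hyper.Sample()}");                // successes observed in 5 draws
    }
}
```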
+ + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the cumulative distribution at location . + + + + + Generates a sample from the Hypergeometric distribution without doing parameter checking. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The n parameter of the distribution. + a random number from the Hypergeometric distribution. + + + + Samples a Hypergeometric distributed random variable. + + The number of successes in n trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Hypergeometric distributed random variables. + + a sequence of successes in n trials. + + + + Samples a random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Continuous Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by a double. + + + + + Gets the largest element in the domain of the distribution which can be represented by a double. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Discrete Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by an integer. + + + + + Gets the largest element in the domain of the distribution which can be represented by an integer. 
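The two interface summaries in this stretch (continuous and discrete univariate distributions) are best understood through polymorphic use: continuous types expose Density/DensityLn, discrete types expose Probability/ProbabilityLn, and both can be sampled. The sketch below is an assumption-laden illustration; the concrete parameter values and the InterfaceExample class are not part of the documented API.

```csharp
using System;
using MathNet.Numerics.Distributions;

class InterfaceExample
{
    static void Main()
    {
        // A continuous distribution seen through the continuous-distribution interface.
        IContinuousDistribution cont = new Gamma(2.0, 1.5);
        Console.WriteLine($"{cont}: pdf(1) = {cont.Density(1.0)}, sample = {cont.Sample()}");

        // A discrete distribution exposes a probability mass function instead of a density.
        IDiscreteDistribution disc = new Hypergeometric(50, 10, 5);
        Console.WriteLine($"{disc}: P(X = 1) = {disc.Probability(1)}, sample = {disc.Sample()}");
    }
}
```

Writing helper code against these interfaces rather than the concrete classes keeps it reusable across all of the distributions documented in this file.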
+ + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Probability Distribution. + + + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Continuous Univariate Inverse Gamma distribution. + The inverse Gamma distribution is a distribution over the positive real numbers parameterized by + two positive parameters. + Wikipedia - InverseGamma distribution. + + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Gets or sets the shape (α) parameter. Range: α > 0. + + + + + Gets or sets The scale (β) parameter. Range: β > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + Throws . + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the density at . 
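For the Inverse Gamma entries above, a brief instance-level sketch may help; it assumes the constructor takes (shape α, scale β) as documented, and the values α = 3, β = 2 are chosen only so that the mean β/(α−1) works out to 1.

```csharp
using System;
using MathNet.Numerics.Distributions;

class InverseGammaExample
{
    static void Main()
    {
        // Illustrative parameters: shape α = 3, scale β = 2, so mean = β/(α-1) = 1 and mode = β/(α+1) = 0.5.
        var invGamma = new InverseGamma(3.0, 2.0);

        Console.WriteLine($"mean = {invGamma.Mean}, mode = {invGamma.Mode}");
        Console.WriteLine($"pdf(0.5) = {invGamma.Density(0.5)}");
        Console.WriteLine($"cdf(0.5) = {invGamma.CumulativeDistribution(0.5)}");
        Console.WriteLine($"sample   = {invGamma.Sample()}");
    }
}
```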
+ + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Gets the mean (μ) of the distribution. Range: μ > 0. + + + + + Gets the shape (λ) of the distribution. Range: λ > 0. + + + + + Initializes a new instance of the InverseGaussian class. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Inverse Gaussian distribution. + + + + + Gets the variance of the Inverse Gaussian distribution. + + + + + Gets the standard deviation of the Inverse Gaussian distribution. + + + + + Gets the median of the Inverse Gaussian distribution. + No closed form analytical expression exists, so this value is approximated numerically and can throw an exception. + + + + + Gets the minimum of the Inverse Gaussian distribution. + + + + + Gets the maximum of the Inverse Gaussian distribution. + + + + + Gets the skewness of the Inverse Gaussian distribution. + + + + + Gets the kurtosis of the Inverse Gaussian distribution. + + + + + Gets the mode of the Inverse Gaussian distribution. 
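The Inverse Gaussian entries above only show a constructor that takes an explicit random source, so the sketch below passes one in. Treat the exact signatures as assumptions if your MathNet.Numerics version differs; μ = 1.0, λ = 2.0, the seed, and the sample count are illustrative only.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class InverseGaussianExample
{
    static void Main()
    {
        // μ = 1.0, λ = 2.0 are illustrative; the documented constructor takes a System.Random source.
        var invGauss = new InverseGaussian(1.0, 2.0, new Random(7));

        Console.WriteLine($"mean = {invGauss.Mean}, variance = {invGauss.Variance}");
        Console.WriteLine($"pdf(0.8) = {invGauss.Density(0.8)}");
        Console.WriteLine($"cdf(0.8) = {invGauss.CumulativeDistribution(0.8)}");

        // Maximum-likelihood fit from sample data, as described by the Estimate entry below.
        double[] data = invGauss.Samples().Take(5000).ToArray();
        var fitted = InverseGaussian.Estimate(data);
        Console.WriteLine($"fitted mean ≈ {fitted.Mean}");
    }
}
```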
+ + + + + Gets the entropy of the Inverse Gaussian distribution (currently not supported). + + + + + Generates a sample from the inverse Gaussian distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the inverse Gaussian distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the inverse Gaussian distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Estimates the Inverse Gaussian parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + An Inverse Gaussian distribution. + + + + Multivariate Inverse Wishart distribution. 
This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The inverse Wishart distribution + is the conjugate prior for the covariance matrix of a multivariate normal distribution. + Wikipedia - Inverse-Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Gets or sets the degree of freedom (ν) for the inverse Wishart distribution. + + + + + Gets or sets the scale matrix (Ψ) for the inverse Wishart distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 0-340-80752-0. + + + + Gets the variance of the distribution. + + The variance of the distribution. + Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. + + + + Evaluates the probability density function for the inverse Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + a sample from the distribution. + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + The random number generator to use. + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + a sample from the distribution. + + + + Univariate Probability Distribution. + + + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Continuous Univariate Laplace distribution. + The Laplace distribution is a distribution over the real numbers parameterized by a mean and + scale parameter. The PDF is: + p(x) = \frac{1}{2 * scale} \exp{- |x - mean| / scale}. + Wikipedia - Laplace distribution. + + + + + Initializes a new instance of the class (location = 0, scale = 1). + + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. 
+ If is negative. + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + + + + Gets the location (μ) of the Laplace distribution. + + + + + Gets the scale (b) of the Laplace distribution. Range: b > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples a Laplace distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sample from the Laplace distribution. + + a sample from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. 
+ a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Log-Normal distribution. + For details about this distribution, see + Wikipedia - Log-Normal distribution. + + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the logarithm of the distribution. + The shape (σ) of the logarithm of the distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a log-normal distribution with the desired mu and sigma parameters. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Constructs a log-normal distribution with the desired mean and variance. + + The mean of the log-normal distribution. + The variance of the log-normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Estimates the log-normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + MATLAB: lognfit + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + + + + Gets the log-scale (μ) (mean of the logarithm) of the distribution. + + + + + Gets the shape (σ) (standard deviation of the logarithm) of the distribution. Range: σ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mu of the log-normal distribution. + + + + + Gets the variance of the log-normal distribution. + + + + + Gets the standard deviation of the log-normal distribution. + + + + + Gets the entropy of the log-normal distribution. + + + + + Gets the skewness of the log-normal distribution. + + + + + Gets the mode of the log-normal distribution. + + + + + Gets the median of the log-normal distribution. + + + + + Gets the minimum of the log-normal distribution. + + + + + Gets the maximum of the log-normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . 
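The Log-Normal entries above cover three ways in: constructing from the log-scale μ and shape σ, constructing from the distribution's own mean and variance, and fitting via maximum likelihood (Estimate, cf. MATLAB lognfit). A hedged sketch of all three follows; the parameter values, seed-free sampling, and class name are illustrative assumptions.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class LogNormalExample
{
    static void Main()
    {
        // Parameterized by the log-scale μ and shape σ of the underlying normal (illustrative: μ = 0, σ = 0.5).
        var ln = new LogNormal(0.0, 0.5);
        Console.WriteLine($"mean = {ln.Mean}, median = {ln.Median}");
        Console.WriteLine($"pdf(1) = {ln.Density(1.0)}");
        Console.WriteLine($"invcdf(0.95) = {ln.InverseCumulativeDistribution(0.95)}");

        // Construct from the distribution's own mean/variance, then recover μ and σ from samples.
        var fromMoments = LogNormal.WithMeanVariance(2.0, 1.0);
        var fitted = LogNormal.Estimate(fromMoments.Samples().Take(10000).ToArray());
        Console.WriteLine($"fitted μ ≈ {fitted.Mu}, σ ≈ {fitted.Sigma}");
    }
}
```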
+ + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the density at . + + MATLAB: lognpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: logncdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the inverse cumulative density at . + + MATLAB: logninv + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. 
Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Multivariate Matrix-valued Normal distributions. The distribution + is parameterized by a mean matrix (M), a covariance matrix for the rows (V) and a covariance matrix + for the columns (K). If the dimension of M is d-by-m then V is d-by-d and K is m-by-m. + Wikipedia - MatrixNormal distribution. + + + + + The mean of the matrix normal distribution. + + + + + The covariance matrix for the rows. + + + + + The covariance matrix for the columns. + + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + The random number generator which is used to draw random samples. + If the dimensions of the mean and two covariance matrices don't match. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + + + + Gets the mean. (M) + + The mean of the distribution. + + + + Gets the row covariance. (V) + + The row covariance. + + + + Gets the column covariance. (K) + + The column covariance. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Evaluates the probability density function for the matrix normal distribution. + + The matrix at which to evaluate the density at. + the density at + If the argument does not have the correct dimensions. + + + + Samples a matrix normal distributed random variable. + + A random number from this distribution. + + + + Samples a matrix normal distributed random variable. + + The random number generator to use. + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + a sequence of samples from the distribution. + + + + Samples a vector normal distributed random variable. + + The random number generator to use. + The mean of the vector normal distribution. + The covariance matrix of the vector normal distribution. + a sequence of samples from defined distribution. + + + + Multivariate Multinomial distribution. For details about this distribution, see + Wikipedia - Multinomial distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. + + + + + Stores the normalized multinomial probabilities. + + + + + The number of trials. + + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative or do not sum to one. 
+ If is negative. + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + Initializes a new instance of the Multinomial class from histogram . The distribution will + not be automatically updated when the histogram changes. + + Histogram instance + The number of trials. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative returns false, + if the sum of parameters is 0.0, or if the number of trials is negative; otherwise true. + + + + Gets the proportion of ratios. + + + + + Gets the number of trials. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Computes values of the probability mass function. + + Non-negative integers x1, ..., xk + The probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Computes values of the log probability mass function. + + Non-negative integers x1, ..., xk + The log probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Samples one multinomial distributed random variable. + + the counts for each of the different possible values. + + + + Samples a sequence multinomially distributed random variables. + + a sequence of counts for each of the different possible values. + + + + Samples one multinomial distributed random variable. + + The random number generator to use. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + the counts for each of the different possible values. + + + + Samples a multinomially distributed random variable. + + The random number generator to use. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of variables needed. + a sequence of counts for each of the different possible values. + + + + Discrete Univariate Negative Binomial distribution. + The negative binomial is a distribution over the natural numbers with two parameters r, p. For the special + case that r is an integer one can interpret the distribution as the number of failures before the r'th success + when the probability of success is p. + Wikipedia - NegativeBinomial distribution. + + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. 
+ The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Gets the number of successes. Range: r ≥ 0. + + + + + Gets the probability of success. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Samples a negative binomial distributed random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + a sample from the distribution. + + + + Samples a NegativeBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of NegativeBinomial distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. 
+ + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Continuous Univariate Normal distribution, also known as Gaussian distribution. + For details about this distribution, see + Wikipedia - Normal distribution. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a normal distribution from a mean and standard deviation. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + a normal distribution. + + + + Constructs a normal distribution from a mean and variance. + + The mean (μ) of the normal distribution. + The variance (σ^2) of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + + + + Constructs a normal distribution from a mean and precision. + + The mean (μ) of the normal distribution. + The precision of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. 
+ A normal distribution. + + + + Estimates the normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + MATLAB: normfit + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Gets the mean (μ) of the normal distribution. + + + + + Gets the standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + + Gets the variance of the normal distribution. + + + + + Gets the precision of the normal distribution. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the entropy of the normal distribution. + + + + + Gets the skewness of the normal distribution. + + + + + Gets the mode of the normal distribution. + + + + + Gets the median of the normal distribution. + + + + + Gets the minimum of the normal distribution. + + + + + Gets the maximum of the normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the density at . + + MATLAB: normpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: normcdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. 
This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the inverse cumulative density at . + + MATLAB: norminv + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + This structure represents the type over which the distribution + is defined. + + + + + Initializes a new instance of the struct. + + The mean of the pair. + The precision of the pair. + + + + Gets or sets the mean of the pair. + + + + + Gets or sets the precision of the pair. + + + + + Multivariate Normal-Gamma Distribution. + The distribution is the conjugate prior distribution for the + distribution. It specifies a prior over the mean and precision of the distribution. + It is parameterized by four numbers: the mean location, the mean scale, the precision shape and the + precision inverse scale. + The distribution NG(mu, tau | mloc,mscale,psscale,pinvscale) = Normal(mu | mloc, 1/(mscale*tau)) * Gamma(tau | psscale,pinvscale). + The following degenerate cases are special: when the precision is known, + the precision shape will encode the value of the precision while the precision inverse scale is positive + infinity. When the mean is known, the mean location will encode the value of the mean while the scale + will be positive infinity. A completely degenerate NormalGamma distribution with known mean and precision is possible as well. + Wikipedia - Normal-Gamma distribution. + + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. 
+ The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Gets the location of the mean. + + + + + Gets the scale of the mean. + + + + + Gets the shape of the precision. + + + + + Gets the inverse scale of the precision. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Returns the marginal distribution for the mean of the NormalGamma distribution. + + the marginal distribution for the mean of the NormalGamma distribution. + + + + Returns the marginal distribution for the precision of the distribution. + + The marginal distribution for the precision of the distribution/ + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the variance of the distribution. + + The mean of the distribution. + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + Density value + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + Density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + The log of the density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + The log of the density value + + + + Generates a sample from the NormalGamma distribution. + + a sample from the distribution. + + + + Generates a sequence of samples from the NormalGamma distribution + + a sequence of samples from the distribution. + + + + Generates a sample from the NormalGamma distribution. + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sample from the distribution. + + + + Generates a sequence of samples from the NormalGamma distribution + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sequence of samples from the distribution. + + + + Continuous Univariate Pareto distribution. + The Pareto distribution is a power law probability distribution that coincides with social, + scientific, geophysical, actuarial, and many other types of observable phenomena. + For details about this distribution, see + Wikipedia - Pareto distribution. + + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + If or are negative. + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The random number generator which is used to draw random samples. + If or are negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. 
Range: α > 0. + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Pareto distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. 
+ a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Poisson distribution. + + + Distribution is described at Wikipedia - Poisson distribution. + Knuth's method is used to generate Poisson distributed random variables. + f(x) = exp(-λ)*λ^x/x!; + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + If is equal or less then 0.0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + If is equal or less then 0.0. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + + + + Gets the Poisson distribution parameter λ. Range: λ > 0. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + Approximation, see Wikipedia Poisson distribution + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + Approximation, see Wikipedia Poisson distribution + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the probability mass at location . 
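The Poisson members described above lend themselves to a short usage sketch. The following assumes the `MathNet.Numerics.Distributions` namespace and the member names `Poisson`, `Mean`, `Variance` and `Sample` (inferred from these comments, not confirmed by this file); the probability mass is evaluated directly from the formula f(x) = exp(-λ)·λ^x/x! quoted above.

```csharp
// Sketch only: namespace and member names (Poisson, Mean, Variance, Sample)
// are assumptions inferred from the comments above, not confirmed by this file.
using System;
using MathNet.Numerics.Distributions;   // assumed namespace

double lambda = 4.0;
var poisson = new Poisson(lambda);      // constructor documented as taking λ (λ > 0)

// For a Poisson(λ) both the mean and the variance equal λ.
Console.WriteLine($"mean = {poisson.Mean}, variance = {poisson.Variance}");

// PMF evaluated directly from the documented formula f(x) = exp(-λ)·λ^x / x!.
int k = 3;
double lnFactorialK = 0.0;
for (int i = 2; i <= k; i++) lnFactorialK += Math.Log(i);
double pmf = Math.Exp(-lambda + k * Math.Log(lambda) - lnFactorialK);
Console.WriteLine($"P(X = {k}) = {pmf:F4}");

// A few random draws (method name 'Sample' assumed).
for (int i = 0; i < 5; i++) Console.Write(poisson.Sample() + " ");
```

The hand-rolled PMF above can also serve as a cross-check against whatever PMF helper the library actually exposes.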
+ + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Generates one sample from the Poisson distribution. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by Knuth's method. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by "Rejection method PA". + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + "Rejection method PA" from "The Computer Generation of Poisson Random Variables" by A. C. Atkinson, + Journal of the Royal Statistical Society Series C (Applied Statistics) Vol. 28, No. 1. (1979) + The article is on pages 29-35. The algorithm given here is on page 32. + + + + Samples a Poisson distributed random variable. + + A sample from the Poisson distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Poisson distributed random variables. + + a sequence of successes in N trials. + + + + Samples a Poisson distributed random variable. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Samples a Poisson distributed random variable. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Rayleigh distribution. + The Rayleigh distribution (pronounced /ˈreɪli/) is a continuous probability distribution. As an + example of how it arises, the wind speed will have a Rayleigh distribution if the components of + the two-dimensional wind velocity vector are uncorrelated and normally distributed with equal variance. + For details about this distribution, see + Wikipedia - Rayleigh distribution. + + + + + Initializes a new instance of the class. 
+ + The scale (σ) of the distribution. Range: σ > 0. + If is negative. + + + + Initializes a new instance of the class. + + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the scale (σ) of the distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Rayleigh distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (σ) of the distribution. Range: σ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (σ) of the distribution. Range: σ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. 
+ + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized Error Distribution (SGED). + Implements the univariate SSkewed Generalized Error Distribution. For details about this + distribution, see + + Wikipedia - Generalized Error Distribution. + It includes Laplace, Normal and Student-t distributions. + This is the distribution with q=Inf. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedError class. This is a generalized error distribution + with location=0.0, scale=1.0, skew=0.0 and p=2.0 (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Generates a sample from the Skew Generalized Error distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. 
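A minimal sketch of the skewed generalized error distribution described above, assuming the `MathNet.Numerics.Distributions` namespace and a `Samples()` member (both inferred, not confirmed here). Per the comments, the default parameterization location=0, scale=1, skew=0, p=2 is a standard normal and the scale squared is the variance, so the sample variance should come out near 1.

```csharp
// Sketch only: namespace and member names are assumptions inferred from the comments above.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;   // assumed namespace

// (location, scale, skew, p) per the constructor parameters documented above;
// location=0, scale=1, skew=0, p=2 is stated to be a standard normal distribution.
var sged = new SkewedGeneralizedError(0.0, 1.0, 0.0, 2.0);

double[] xs = sged.Samples().Take(100_000).ToArray();   // 'Samples' name assumed

double mean = xs.Average();
double variance = xs.Select(x => (x - mean) * (x - mean)).Average();
Console.WriteLine($"sample mean ≈ {mean:F3} (expect ~0), sample variance ≈ {variance:F3} (expect ~1)");
```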
+ + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized Error distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized T-distribution. + Implements the univariate Skewed Generalized t-distribution. For details about this + distribution, see + + Wikipedia - Skewed generalized t-distribution. + The skewed generalized t-distribution contains many different distributions within it + as special cases based on the parameterization chosen. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedT class. This is a skewed generalized t-distribution + with location=0.0, scale=1.0, skew=0.0, p=2.0 and q=Inf (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Given a parameter set, returns the distribution that matches this parameterization. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. 
Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + Null if no known distribution matches the parameterization, else the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the first parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Gets the second parameter that controls the kurtosis of the distribution. Range: q > 0. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the inverse cumulative density at . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Skew Generalized t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. 
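The static sampler documented just above takes a random number generator followed by (location, scale, skew, p, q). A hedged sketch, with the method name `SkewedGeneralizedT.Sample` and the argument order assumed from that parameter list; the comments state that the location (μ) is the mean of the distribution, which gives a simple check.

```csharp
// Sketch only: method name and argument order (rng, location, scale, skew, p, q)
// are assumed from the parameter list documented above.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;   // assumed namespace

var rng = new Random(42);
double location = 1.5, scale = 2.0, skew = 0.25, p = 2.0, q = 10.0;

double[] xs = Enumerable.Range(0, 50_000)
    .Select(_ => SkewedGeneralizedT.Sample(rng, location, scale, skew, p, q))
    .ToArray();

// The comments above state that the location (μ) is the mean of the distribution.
Console.WriteLine($"sample mean ≈ {xs.Average():F3} (expect ~{location})");
```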
+ + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Stable distribution. + A random variable is said to be stable (or to have a stable distribution) if it has + the property that a linear combination of two independent copies of the variable has + the same distribution, up to location and scale parameters. + For details about this distribution, see + Wikipedia - Stable distribution. + + + + + Initializes a new instance of the class. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + + + + Initializes a new instance of the class. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + + + + Gets the stability (α) of the distribution. Range: 2 ≥ α > 0. + + + + + Gets The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + + + + + Gets the scale (c) of the distribution. Range: c > 0. 
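A small sketch for the stable distribution whose constructor is documented above as taking (α, β, c, μ). With α = 2 the stable family reduces to the Gaussian case, so samples should be symmetric about μ. The namespace and member names are assumptions inferred from these comments.

```csharp
// Sketch only: namespace and member names are assumptions; the constructor order
// (alpha, beta, scale c, location mu) follows the parameters documented above.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;   // assumed namespace

// alpha = 2 is the Gaussian special case of the stable family.
var stable = new Stable(2.0, 0.0, 1.0, 5.0);

double[] xs = stable.Samples().Take(50_000).ToArray();   // 'Samples' name assumed
Console.WriteLine($"sample mean ≈ {xs.Average():F3} (expect ~5 for alpha = 2)");
```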
+ + + + + Gets the location (μ) of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets he entropy of the distribution. + + Always throws a not supported exception. + + + + Gets the skewness of the distribution. + + Throws a not supported exception of Alpha != 2. + + + + Gets the mode of the distribution. + + Throws a not supported exception if Beta != 0. + + + + Gets the median of the distribution. + + Throws a not supported exception if Beta != 0. + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + Throws a not supported exception if Alpha != 2, (Alpha != 1 and Beta !=0), or (Alpha != 0.5 and Beta != 1) + + + + Samples the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a random number from the distribution. + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Stable distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. 
+ a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Continuous Univariate Student's T-distribution. + Implements the univariate Student t-distribution. For details about this + distribution, see + + Wikipedia - Student's t-distribution. + + We use a slightly generalized version (compared to + Wikipedia) of the Student t-distribution. Namely, one which also + parameterizes the location and scale. See the book "Bayesian Data + Analysis" by Gelman et al. for more details. + The density of the Student t-distribution p(x|mu,scale,dof) = + Gamma((dof+1)/2) (1 + (x - mu)^2 / (scale * scale * dof))^(-(dof+1)/2) / + (Gamma(dof/2)*Sqrt(dof*pi*scale)). + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. This might involve heavy + computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the StudentT class. This is a Student t-distribution with location 0.0 + scale 1.0 and degrees of freedom 1. + + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The random number generator which is used to draw random samples. 
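A minimal sketch for the Student t-distribution constructor documented above, which takes (location, scale, degrees of freedom). The namespace and the `Samples()` member name are assumptions inferred from these comments; since the t-distribution is symmetric, the sample median should sit near the location parameter.

```csharp
// Sketch only: namespace and member names are assumptions inferred from the comments above.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;   // assumed namespace

// (location, scale, degrees of freedom) per the constructor parameters documented above.
var t = new StudentT(2.0, 1.0, 5.0);

double[] xs = t.Samples().Take(100_000).ToArray();   // 'Samples' name assumed
Array.Sort(xs);

// The t-distribution is symmetric, so the median should match the location parameter.
Console.WriteLine($"sample median ≈ {xs[xs.Length / 2]:F3} (expect ~2.0)");
```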
+ + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Gets the location (μ) of the Student t-distribution. + + + + + Gets the scale (σ) of the Student t-distribution. Range: σ > 0. + + + + + Gets the degrees of freedom (ν) of the Student t-distribution. Range: ν > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Student t-distribution. + + + + + Gets the variance of the Student t-distribution. + + + + + Gets the standard deviation of the Student t-distribution. + + + + + Gets the entropy of the Student t-distribution. + + + + + Gets the skewness of the Student t-distribution. + + + + + Gets the mode of the Student t-distribution. + + + + + Gets the median of the Student t-distribution. + + + + + Gets the minimum of the Student t-distribution. + + + + + Gets the maximum of the Student t-distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Samples student-t distributed random variables. + + The algorithm is method 2 in section 5, chapter 9 + in L. Devroye's "Non-Uniform Random Variate Generation" + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a random number from the standard student-t distribution. + + + + Generates a sample from the Student t-distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Student t-distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the log density at . 
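The two static density helpers documented above (the PDF and its logarithm) should agree up to exponentiation, which gives a cheap consistency check. The method names and the argument order (location, scale, freedom, x) are assumed from the parameter lists above.

```csharp
// Sketch only: names and argument order (location, scale, freedom, x) are assumed
// from the parameter lists documented above.
using System;
using MathNet.Numerics.Distributions;   // assumed namespace

double location = 0.0, scale = 1.0, freedom = 3.0, x = 1.7;

double pdf   = StudentT.PDF(location, scale, freedom, x);
double lnPdf = StudentT.PDFLn(location, scale, freedom, x);

// The two helpers must be consistent: exp(lnPDF) equals PDF up to rounding.
Console.WriteLine($"PDF = {pdf:G6}, exp(PDFLn) = {Math.Exp(lnPdf):G6}");
```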
+ + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Student t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Student t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Triangular distribution. + For details, see Wikipedia - Triangular distribution. + + The distribution will use the by default. + Users can get/set the random number generator by using the property. + The statistics classes will check whether all the incoming parameters are in the allowed range. This might involve heavy computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). 
Range: lower ≤ mode ≤ upper + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The random number generator which is used to draw random samples. + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets or sets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Triangular distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. 
Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Generates a sample from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Initializes a new instance of the TruncatedPareto class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The random number generator which is used to draw random samples. + If or are non-positive or if T ≤ xm. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets the truncation (T) of the distribution. Range: T > 0. + + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Gets the mean of the truncated Pareto distribution. + + + + + Gets the variance of the truncated Pareto distribution. 
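A short sketch for the truncated Pareto distribution introduced above, whose constructor is documented as taking (scale xm, shape α, truncation T, random source). The namespace and the `Samples()` member name are assumptions; every draw must fall inside the support [xm, T].

```csharp
// Sketch only: namespace and member names are assumptions; the constructor order
// (scale xm, shape alpha, truncation T, rng) follows the parameters documented above.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;   // assumed namespace

var tp = new TruncatedPareto(1.0, 2.5, 10.0, new Random(1));

double[] xs = tp.Samples().Take(10_000).ToArray();   // 'Samples' name assumed

// Every draw must fall inside the support [xm, T] of the truncated distribution.
Console.WriteLine($"min = {xs.Min():F3} (≥ 1), max = {xs.Max():F3} (≤ 10), mean = {xs.Average():F3}");
```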
+ + + + + Gets the standard deviation of the truncated Pareto distribution. + + + + + Gets the mode of the truncated Pareto distribution (not supported). + + + + + Gets the minimum of the truncated Pareto distribution. + + + + + Gets the maximum of the truncated Pareto distribution. + + + + + Gets the entropy of the truncated Pareto distribution (not supported). + + + + + Gets the skewness of the truncated Pareto distribution. + + + + + Gets the median of the truncated Pareto distribution. + + + + + Generates a sample from the truncated Pareto distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. 
+ The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Continuous Univariate Weibull distribution. + For details about this distribution, see + Wikipedia - Weibull distribution. + + + The Weibull distribution is parametrized by a shape and scale parameter. + + + + + Reusable intermediate result 1 / (_scale ^ _shape) + + + By caching this parameter we can get slightly better numerics precision + in certain constellations without any additional computations. + + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Gets the shape (k) of the Weibull distribution. Range: k > 0. + + + + + Gets the scale (λ) of the Weibull distribution. Range: λ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Weibull distribution. + + + + + Gets the variance of the Weibull distribution. + + + + + Gets the standard deviation of the Weibull distribution. + + + + + Gets the entropy of the Weibull distribution. + + + + + Gets the skewness of the Weibull distribution. + + + + + Gets the mode of the Weibull distribution. + + + + + Gets the median of the Weibull distribution. + + + + + Gets the minimum of the Weibull distribution. + + + + + Gets the maximum of the Weibull distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Generates a sample from the Weibull distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Weibull distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the density at . 
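A minimal sketch for the Weibull constructor documented above, which takes (shape k, scale λ). With k = 1 the Weibull reduces to an exponential distribution with mean λ, which gives an easy sanity check on the sampler. The namespace and member names are assumptions inferred from these comments.

```csharp
// Sketch only: namespace and member names are assumptions inferred from the comments above.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;   // assumed namespace

// (shape k, scale lambda) per the constructor parameters documented above.
// With k = 1 the Weibull reduces to an exponential distribution with mean lambda.
var weibull = new Weibull(1.0, 3.0);

double[] xs = weibull.Samples().Take(100_000).ToArray();   // 'Samples' name assumed
Console.WriteLine($"sample mean ≈ {xs.Average():F3} (expect ~3.0 for k = 1)");
```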
+ + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Implemented according to: Parameter estimation of the Weibull probability distribution, 1994, Hongzhu Qiao, Chris P. Tsokos + + + + Returns a Weibull distribution. + + + + Generates a sample from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Multivariate Wishart distribution. This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The Wishart distribution + is the conjugate prior for the precision (inverse covariance) matrix of the multivariate + normal distribution. + Wikipedia - Wishart distribution. + + + + + The degrees of freedom for the Wishart distribution. + + + + + The scale matrix for the Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The random number generator which is used to draw random samples. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Gets or sets the degrees of freedom (n) for the Wishart distribution. 
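The Wishart comments above describe a matrix-valued distribution parameterized by degrees of freedom n and a scale matrix V, whose mean is n·V. The sketch below builds one over 2×2 matrices with an identity scale; the `Matrix<double>` builder, the namespace and the member names are all assumptions, not confirmed by this file.

```csharp
// Sketch only: namespace, Matrix<double> helper and member names are assumptions,
// not confirmed by the documentation above.
using System;
using MathNet.Numerics.Distributions;     // assumed namespace
using MathNet.Numerics.LinearAlgebra;     // assumed dependency for the scale matrix

var scale = Matrix<double>.Build.DenseIdentity(2);   // 2x2 identity scale matrix V
var wishart = new Wishart(5.0, scale);               // (degrees of freedom n, scale matrix V)

Matrix<double> w = wishart.Sample();                 // one random symmetric positive-definite matrix
Console.WriteLine(w.ToString());

// For a Wishart(n, V) the mean is n·V, matching the Mean property described above.
Console.WriteLine(wishart.Mean.ToString());
```

As the comments note, this matrix distribution is the conjugate prior for the precision matrix of a multivariate normal, which is its typical use.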
+ + + + + Gets or sets the scale matrix (V) for the Wishart distribution. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + + + + Gets the variance of the distribution. + + The variance of the distribution. + + + + Evaluates the probability density function for the Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + A random number from this distribution. + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The cholesky decomposition to use. + a random number from the distribution. + + + + Discrete Univariate Zipf distribution. + Zipf's law, an empirical law formulated using mathematical statistics, refers to the fact + that many types of data studied in the physical and social sciences can be approximated with + a Zipfian distribution, one of a family of related discrete power law probability distributions. + For details about this distribution, see + Wikipedia - Zipf distribution. + + + + + The s parameter of the distribution. + + + + + The n parameter of the distribution. + + + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Gets or sets the s parameter of the distribution. + + + + + Gets or sets the n parameter of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. 
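A minimal sketch of the Zipf distribution described above, assuming the MathNet.Numerics package; s = 1.2 and n = 100 are illustration values, and the static PMF helper is assumed to mirror the instance methods as the documentation suggests.

```csharp
// Minimal sketch of the Zipf API documented above (assumes MathNet.Numerics).
using System;
using MathNet.Numerics.Distributions;

class ZipfDemo
{
    static void Main()
    {
        // s = 1.2 (exponent), n = 100 (number of elements).
        var zipf = new Zipf(1.2, 100);

        Console.WriteLine($"P(X = 1):  {zipf.Probability(1)}");
        Console.WriteLine($"P(X <= 5): {zipf.CumulativeDistribution(5.0)}");

        // Draw a single integer sample from the distribution.
        int sample = zipf.Sample();

        // Static helper mirroring the instance PMF (assumed signature: s, n, k).
        double pmf = Zipf.PMF(1.2, 100, 1);
        Console.WriteLine($"{sample} {pmf}");
    }
}
```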
+ + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The s parameter of the distribution. + The n parameter of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the Zipf distribution without doing parameter checking. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + a random number from the Zipf distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of zipf distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Integer number theory functions. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). 
The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Find out whether the provided 32 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 64 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 32 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 64 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 32 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 64 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 32 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Find out whether the provided 64 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Raises 2 to the provided integer exponent (0 <= exponent < 31). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Raises 2 to the provided integer exponent (0 <= exponent < 63). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Evaluate the binary logarithm of an integer number. + + Two-step method using a De Bruijn-like sequence table lookup. + + + + Find the closest perfect power of two that is larger or equal to the provided + 32 bit integer. + + The number of which to find the closest upper power of two. + A power of two. + + + + + Find the closest perfect power of two that is larger or equal to the provided + 64 bit integer. + + The number of which to find the closest upper power of two. + A power of two. + + + + + Returns the greatest common divisor (gcd) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's + algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. 
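The integer helpers documented above (canonical modulus vs. remainder, gcd, extended gcd, lcm) are small enough to show in one sketch. This assumes they live in the static Euclid class of the MathNet.Numerics package; the method names follow the documentation, but treat the exact signatures as an assumption.

```csharp
// Hedged sketch of the integer number-theory helpers documented above.
using System;
using MathNet.Numerics;

class EuclidDemo
{
    static void Main()
    {
        // Canonical modulus takes the sign of the divisor, remainder the sign of the dividend.
        Console.WriteLine(Euclid.Modulus(-3, 5));    // 2
        Console.WriteLine(Euclid.Remainder(-3, 5));  // -3

        Console.WriteLine(Euclid.IsPowerOfTwo(64));  // True

        // gcd(45, 18) = 9 with Bezout coefficients x, y such that 45*x + 18*y = 9.
        long x, y;
        long d = Euclid.ExtendedGreatestCommonDivisor(45, 18, out x, out y);
        Console.WriteLine($"gcd = {d}, x = {x}, y = {y}"); // gcd = 9, x = 1, y = -2

        Console.WriteLine(Euclid.LeastCommonMultiple(4, 6)); // 12
    }
}
```

The expected gcd output matches the worked gcd(45, 18) example given in the documentation above.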
+ + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the greatest common divisor (gcd) of two big integers. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two big integers. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Collection of functions equivalent to those provided by Microsoft Excel + but backed instead by Math.NET Numerics. + We do not recommend to use them except in an intermediate phase when + porting over solutions previously implemented in Excel. + + + + + An algorithm failed to converge. + + + + + An algorithm failed to converge due to a numerical breakdown. + + + + + An error occurred calling native provider function. + + + + + An error occurred calling native provider function. + + + + + Native provider was unable to allocate sufficient memory. + + + + + Native provider failed LU inversion do to a singular U matrix. + + + + + Compound Monthly Return or Geometric Return or Annualized Return + + + + + Average Gain or Gain Mean + This is a simple average (arithmetic mean) of the periods with a gain. It is calculated by summing the returns for gain periods (return 0) + and then dividing the total by the number of gain periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Average Loss or LossMean + This is a simple average (arithmetic mean) of the periods with a loss. It is calculated by summing the returns for loss periods (return < 0) + and then dividing the total by the number of loss periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Calculation is similar to Standard Deviation , except it calculates an average (mean) return only for periods with a gain + and measures the variation of only the gain periods around the gain mean. Measures the volatility of upside performance. + © Copyright 1996, 1999 Gary L.Gastineau. First Edition. © 1992 Swiss Bank Corporation. + + + + + Similar to standard deviation, except this statistic calculates an average (mean) return for only the periods with a loss and then + measures the variation of only the losing periods around this loss mean. This statistic measures the volatility of downside performance. 
+ + http://www.offshore-library.com/kb/statistics.php + + + + This measure is similar to the loss standard deviation except the downside deviation + considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean. + For example, if the MAR is 7%, the downside deviation would measure the variation of each period that falls below + 7%. (The loss standard deviation, on the other hand, would take only losing periods, calculate an average return for + the losing periods, and then measure the variation between each losing return and the losing return average). + + + + + A measure of volatility in returns below the mean. It's similar to standard deviation, but it only + looks at periods where the investment return was less than average return. + + + + + Measures a fund’s average gain in a gain period divided by the fund’s average loss in a losing + period. Periods can be monthly or quarterly depending on the data frequency. + + + + + Find value x that minimizes the scalar function f(x), constrained within bounds, using the Golden Section algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + The missing gradient is evaluated numerically (forward difference). + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. + An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. + An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. 
The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + + Find both complex roots of the quadratic equation c + b*x + a*x^2 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The coefficients of the polynomial in ascending order, e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2" + The roots of the polynomial + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The polynomial. + The roots of the polynomial + + + + Find all roots of the Chebychev polynomial of the first kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*(2i-1)/(2n)) + + + + Find all roots of the Chebychev polynomial of the second kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*i/(n-1)) + + + + Least-Squares Curve Fitting Routines + + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as [a, b] array, + where a is the intercept and b the slope. + + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning its best fitting parameters as (a, r) tuple. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... 
+ pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning a function y' for the best fitting polynomial. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Weighted Least-Squares fitting the points (x,y) and weights w to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning a function y' for the best fitting combination. + If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Weighted Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) and weights w to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. 
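A short sketch of the least-squares helpers described above, assuming the MathNet.Numerics package; the sample data are made up for illustration, and Fit.Line, Fit.Polynomial and Fit.LineFunc are assumed to be the line/polynomial variants the documentation lists.

```csharp
// Minimal sketch of the least-squares fitting helpers documented above.
using System;
using MathNet.Numerics;

class FitDemo
{
    static void Main()
    {
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 2.1, 3.9, 6.2, 7.8, 10.1 };

        // Line fit y = a + b*x, returned as the (intercept, slope) pair.
        var line = Fit.Line(x, y);
        Console.WriteLine($"a = {line.Item1}, b = {line.Item2}");

        // Second-order polynomial fit, coefficients in ascending order [p0, p1, p2].
        double[] p = Fit.Polynomial(x, y, 2);
        Console.WriteLine(string.Join(", ", p));

        // A callable y'(x) for the best-fitting line.
        Func<double, double> f = Fit.LineFunc(x, y);
        Console.WriteLine(f(6.0));
    }
}
```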
+ + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning its best fitting parameter p. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning its best fitting parameter p0 and p1. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning its best fitting parameter p0, p1 and p2. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning a function y' for the best fitting curve. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate a linearly spaced sample vector of the given length between the specified values (inclusive). + Equivalent to MATLAB linspace but with the length as first instead of last argument. + + + + + Generate samples by sampling a function at linearly spaced points between the specified values (inclusive). + + + + + Generate a base 10 logarithmically spaced sample vector of the given length between the specified decade exponents (inclusive). + Equivalent to MATLAB logspace but with the length as first instead of last argument. + + + + + Generate samples by sampling a function at base 10 logarithmically spaced points between the specified decade exponents (inclusive). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). 
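The linspace/logspace-style generators above can be exercised as follows; a minimal sketch assuming the MathNet.Numerics Generate class, with LinearSpacedMap assumed to be the "sample a function at linearly spaced points" helper mentioned above.

```csharp
// Hedged sketch of the linearly/logarithmically spaced sample generators.
using System;
using MathNet.Numerics;

class GenerateDemo
{
    static void Main()
    {
        // Five linearly spaced points between 0 and 1 (inclusive): 0, 0.25, 0.5, 0.75, 1.
        double[] lin = Generate.LinearSpaced(5, 0.0, 1.0);

        // Four base-10 logarithmically spaced points between 10^0 and 10^3: 1, 10, 100, 1000.
        double[] log = Generate.LogSpaced(4, 0.0, 3.0);

        // Sample sin(x) at 100 linearly spaced points over one full period.
        double[] sine = Generate.LinearSpacedMap(100, 0.0, 2.0 * Math.PI, t => Math.Sin(t));

        Console.WriteLine($"{lin.Length} {log.Length} {sine.Length}");
    }
}
```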
+ + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate samples by sampling a function at linearly spaced points within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + + + + + Create a periodic wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic wave. + + The number of samples to generate. + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a Sine wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite Sine wave sequence. + + Samples per unit. + Frequency in samples per unit. + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic square wave, starting with the high phase. + + The number of samples to generate. + Number of samples of the high phase. + Number of samples of the low phase. + Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create an infinite periodic square wave sequence, starting with the high phase. + + Number of samples of the high phase. + Number of samples of the low phase. 
+ Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create a periodic triangle wave, starting with the raise phase from the lowest sample. + + The number of samples to generate. + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic triangle wave sequence, starting with the raise phase from the lowest sample. + + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create a periodic sawtooth wave, starting with the lowest sample. + + The number of samples to generate. + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic sawtooth wave sequence, starting with the lowest sample. + + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an array with each field set to the same value. + + The number of samples to generate. + The value that each field should be set to. + + + + Create an infinite sequence where each element has the same value. + + The value that each element should be set to. + + + + Create a Heaviside Step sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. + + + + Create an infinite Heaviside Step sample sequence. + + The maximal reached peak. + Offset to the time axis. + + + + Create a Kronecker Delta impulse sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Create a Kronecker Delta impulse sample vector. + + The maximal reached peak. + Offset to the time axis, hence the sample index of the impulse. + + + + Create a periodic Kronecker Delta impulse sample vector. + + The number of samples to generate. + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Create a Kronecker Delta impulse sample vector. + + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Generate samples generated by the given computation. + + + + + Generate an infinite sequence generated by the given computation. + + + + + Generate a Fibonacci sequence, including zero as first value. + + + + + Generate an infinite Fibonacci sequence, including zero as first value. + + + + + Create random samples, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create an infinite random sample sequence, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. 
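A hedged sketch of the periodic signal generators described above (sine, square, sawtooth), assuming the MathNet.Numerics Generate class; the parameter order follows the documentation (length, high/low durations or period, low value, high value) and the sampling values are illustration only.

```csharp
// Hedged sketch of the periodic wave generators documented above.
using System;
using MathNet.Numerics;

class WaveDemo
{
    static void Main()
    {
        // 1 kHz sine wave sampled at 8 kHz, amplitude 1.0, 256 samples.
        double[] sine = Generate.Sinusoidal(256, 8000.0, 1000.0, 1.0);

        // Square wave: 10 samples high (+1), 10 samples low (-1), repeated.
        double[] square = Generate.Square(256, 10, 10, -1.0, 1.0);

        // Sawtooth rising from -1 to +1 over a 32-sample period.
        double[] saw = Generate.Sawtooth(256, 32, -1.0, 1.0);

        Console.WriteLine($"{sine[0]} {square[0]} {saw[0]}");
    }
}
```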
+ + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create samples with independent amplitudes of standard distribution. + + + + + Create an infinite sample sequence with independent amplitudes of standard distribution. + + + + + Create samples with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create an infinite sample sequence with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Generate samples by sampling a function at samples from a probability distribution. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution. + + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution. + + + + + Globalized String Handling Helpers + + + + + Tries to get a from the format provider, + returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format + provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Globalized Parsing: Tokenize a node by splitting it into several nodes. + + Node that contains the trimmed string to be tokenized. + List of keywords to tokenize by. + keywords to skip looking for (because they've already been handled). + + + + Globalized Parsing: Parse a double number + + First token of the number. + The parsed double number using the current culture information. + + + + + Globalized Parsing: Parse a float number + + First token of the number. + The parsed float number using the current culture information. + + + + + Calculates r^2, the square of the sample correlation coefficient between + the observed outcomes and the observed predictor values. + Not to be confused with R^2, the coefficient of determination, see . + + The modelled/predicted values + The observed/actual values + Squared Person product-momentum correlation coefficient. + + + + Calculates r, the sample correlation coefficient between the observed outcomes + and the observed predictor values. + + The modelled/predicted values + The observed/actual values + Person product-momentum correlation coefficient. 
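A minimal sketch of the goodness-of-fit helpers above, assuming the MathNet.Numerics GoodnessOfFit class; R, RSquared and CoefficientOfDetermination are assumed to correspond to the three quantities the documentation distinguishes (correlation, squared correlation, coefficient of determination), and the data are illustration only.

```csharp
// Hedged sketch of the goodness-of-fit measures documented above.
using System;
using MathNet.Numerics;

class GoodnessDemo
{
    static void Main()
    {
        double[] observed = { 1.0, 2.1, 2.9, 4.2, 5.1 };
        double[] modelled = { 1.0, 2.0, 3.0, 4.0, 5.0 };

        // r: Pearson correlation between the modelled and observed values.
        double r = GoodnessOfFit.R(modelled, observed);

        // r^2: squared correlation (distinct from the coefficient of determination, see above).
        double r2 = GoodnessOfFit.RSquared(modelled, observed);

        // R^2: coefficient of determination of the model against the observations.
        double coeff = GoodnessOfFit.CoefficientOfDetermination(modelled, observed);

        Console.WriteLine($"r = {r}, r^2 = {r2}, R^2 = {coeff}");
    }
}
```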
+ + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The Standard Error of the regression + + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The degrees of freedom by which the + number of samples is reduced for performing the Standard Error calculation + The Standard Error of the regression + + + + Calculates the R-Squared value, also known as coefficient of determination, + given some modelled and observed values. + + The values expected from the model. + The actual values obtained. + Coefficient of determination. + + + + Complex Fast (FFT) Implementation of the Discrete Fourier Transform (DFT). + + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed from the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. 
+ + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. 
+ Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. 
+ Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Generate the frequencies corresponding to each index in frequency space. + The frequency space has a resolution of sampleRate/N. + Index 0 corresponds to the DC part, the following indices correspond to + the positive frequencies up to the Nyquist frequency (sampleRate/2), + followed by the negative frequencies wrapped around. + + Number of samples. + The sampling rate of the time-space data. + + + + Fourier Transform Convention + + + + + Inverse integrand exponent (forward: positive sign; inverse: negative sign). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling and common exponent (used in Maple). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction (used in Matlab). [= AsymmetricScaling] + + + + + Inverse integrand exponent; No scaling at all (used in all Numerical Recipes based implementations). [= InverseExponent | NoScaling] + + + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + + + Naive forward DHT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Hartley Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive inverse DHT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Hartley Transform Convention Options. + Corresponding time-space vector. + + + + Rescale FFT-the resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Rescale the iFFT-resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Naive generic DHT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Corresponding frequency-space vector. + + + + Hartley Transform Convention + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling. + + + + + Numerical Integration (Quadrature). + + + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. 
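A hedged sketch of a forward/inverse FFT round trip with the in-place Fourier routines described above, assuming the MathNet.Numerics package; FourierOptions.Matlab selects the asymmetric 1/N-on-inverse scaling, and FrequencyScale is assumed to be the bin-frequency helper mentioned above.

```csharp
// Hedged sketch of the in-place FFT routines documented above.
using System;
using System.Numerics;
using MathNet.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FftDemo
{
    static void Main()
    {
        // A 1 kHz tone sampled at 8 kHz, converted to a complex sample vector.
        double[] signal = Generate.Sinusoidal(256, 8000.0, 1000.0, 1.0);
        Complex[] samples = Array.ConvertAll(signal, s => new Complex(s, 0.0));

        // Forward FFT, evaluated in place (Matlab scaling convention).
        Fourier.Forward(samples, FourierOptions.Matlab);

        // Frequency (Hz) of each bin: DC first, then positive frequencies, then negative wrapped around.
        double[] freq = Fourier.FrequencyScale(samples.Length, 8000.0);
        Console.WriteLine($"Bin 32 is at {freq[32]} Hz, magnitude {samples[32].Magnitude}");

        // Inverse FFT restores the original samples (up to rounding).
        Fourier.Inverse(samples, FourierOptions.Matlab);
    }
}
```

With 256 samples at 8 kHz the bin spacing is 31.25 Hz, so bin 32 lands on the 1 kHz test tone.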
+ + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Legendre quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. 
When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Numerical Contour Integration of a complex-valued function over a real variable,. + + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. 
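A minimal sketch of the quadrature entry points described above, assuming the MathNet.Numerics package: the adaptive OnClosedInterval routine, the fixed-order Gauss-Legendre rule, and its 2-dimensional overload over a rectangle.

```csharp
// Hedged sketch of the quadrature helpers documented above.
using System;
using MathNet.Numerics;
using MathNet.Numerics.Integration;

class IntegrateDemo
{
    static void Main()
    {
        // Integrate exp(-x^2) over [0, 1] with the adaptive default routine.
        double a = Integrate.OnClosedInterval(x => Math.Exp(-x * x), 0.0, 1.0);

        // Same integral with a fixed 32nd-order Gauss-Legendre rule.
        double b = GaussLegendreRule.Integrate(x => Math.Exp(-x * x), 0.0, 1.0, 32);

        // 2D integral of x*y over the rectangle [0,1] x [0,1] (exact value 0.25).
        double c = GaussLegendreRule.Integrate((x, y) => x * y, 0.0, 1.0, 0.0, 1.0, 32);

        Console.WriteLine($"{a} {b} {c}");
    }
}
```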
+ + + + Analytic integration algorithm for smooth functions with no discontinuities + or derivative discontinuities and no poles inside the interval. + + + + + Maximum number of iterations, until the asked + maximum error is (likely to be) satisfied. + + + + + Approximate the integral by the double exponential transformation + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximate the integral by the double exponential transformation + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Compute the abscissa vector for a single level. + + The level to evaluate the abscissa vector for. + Abscissa Vector. + + + + Compute the weight vector for a single level. + + The level to evaluate the weight vector for. + Weight Vector. + + + + Precomputed abscissa vector per level. + + + + + Precomputed weight vector per level. + + + + + Getter for the order. + + + + + Getter that returns a clone of the array containing the Kronrod abscissas. + + + + + Getter that returns a clone of the array containing the Kronrod weights. + + + + + Getter that returns a clone of the array containing the Gauss weights. + + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth function to integrate + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth complex function to integrate, defined on the real axis. + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + + Initializes a new instance of the class. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. 
Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + Gettter for the ith abscissa. + + Index of the ith abscissa. + The ith abscissa. + + + + Getter that returns a clone of the array containing the abscissas. + + + + + Getter for the ith weight. + + Index of the ith weight. + The ith weight. + + + + Getter that returns a clone of the array containing the weights. + + + + + Getter for the order. + + + + + Getter for the InvervalBegin. + + + + + Getter for the InvervalEnd. + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth function to integrate. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Contains a method to compute the Gauss-Kronrod abscissas/weights and precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + Contains a method to compute the Gauss-Kronrod abscissas/weights. + + + + + Precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + + + Computes the Gauss-Kronrod abscissas/weights and Gauss weights. + + Defines an Nth order Gauss-Kronrod rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. + Object containing the non-negative abscissas/weights, order. + + + + Returns coefficients of a Stieltjes polynomial in terms of Legendre polynomials. + + + + + Return value and derivative of a Legendre series at given points. + + + + + Return value and derivative of a Legendre polynomial of order at given points. + + + + + Creates a Gauss-Kronrod point. + + + + + Getter for the GaussKronrodPoint. 
+ + Defines an Nth order Gauss-Kronrod rule. Precomputed Gauss-Kronrod abscissas/weights for orders 15, 21, 31, 41, 51, 61 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, and order. + + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Computes the Gauss-Legendre abscissas/weights. + See Pavel Holoborodko for a description of the algorithm. + + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. 1e-10 is usually fine. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Creates and maps a Gauss-Legendre point. + + + + + Getter for the GaussPoint. + + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Getter for the GaussPoint. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Maps the non-negative abscissas/weights from the interval [-1, 1] to the interval [intervalBegin, intervalEnd]. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Contains the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + + Contains two GaussPoint. + + + + + Approximation algorithm for definite integrals by the Trapezium rule of the Newton-Cotes family. + + + Wikipedia - Trapezium Rule + + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. 
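To make the rule objects above concrete, the hedged sketch below constructs a GaussLegendreRule instance and evaluates the weighted sum by hand, then compares it with the static helper. The property names Abscissas, Weights and Order are assumed to correspond to the getters listed in this section; the interval and order are arbitrary illustration values.

```csharp
using System;
using MathNet.Numerics.Integration;

class GaussLegendreDemo
{
    static void Main()
    {
        // Precomputed order 20, mapped onto the interval [0, pi].
        var rule = new GaussLegendreRule(0.0, Math.PI, 20);

        double[] x = rule.Abscissas;  // clones of the internal arrays, per the getters above
        double[] w = rule.Weights;

        // Sum w_i * f(x_i) manually; this should match the static helper.
        double byHand = 0.0;
        for (int i = 0; i < rule.Order; i++)
        {
            byHand += w[i] * Math.Sin(x[i]);
        }

        double viaHelper = GaussLegendreRule.Integrate(Math.Sin, 0.0, Math.PI, 20);
        Console.WriteLine($"{byHand} vs {viaHelper}");   // both close to 2.0
    }
}
```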
+ + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, define don real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation algorithm for definite integrals by Simpson's rule. + + + + + Direct 3-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Even number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Interpolation Factory. + + + + + Creates an interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. 
+ + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Floater-Hormann rational pole-free interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Bulirsch Stoer rational interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.BulirschStoerRationalInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a barycentric polynomial interpolation where the given sample points are equidistant. + + The sample points t, must be equidistant. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolatePolynomialEquidistantSorted + instead, which is more efficient. + + + + + Create a Neville polynomial interpolation based on arbitrary points. + If the points happen to be equidistant, consider to use the much more robust PolynomialEquidistant instead. + Otherwise, consider whether RationalWithoutPoles would not be a more robust alternative. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.NevillePolynomialInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a piecewise linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LinearSpline.InterpolateSorted + instead, which is more efficient. + + + + + Create piecewise log-linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LogLinear.InterpolateSorted + instead, which is more efficient. + + + + + Create an piecewise natural cubic spline interpolation based on arbitrary points, + with zero secondary derivatives at the boundaries. + + The sample points t. 
+ The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateNaturalSorted + instead, which is more efficient. + + + + + Create an piecewise cubic Akima spline interpolation based on arbitrary points. + Akima splines are robust to outliers. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateAkimaSorted + instead, which is more efficient. + + + + + Create a piecewise cubic Hermite spline interpolation based on arbitrary points + and their slopes/first derivative. + + The sample points t. + The sample point values x(t). + The slope at the sample points. Optimized for arrays. + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateHermiteSorted + instead, which is more efficient. + + + + + Create a step-interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.StepInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Barycentric Interpolation Algorithm. + + Supports neither differentiation nor integration. + + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + Barycentric weights (N), sorted ascendingly by x. + + + + Create a barycentric polynomial interpolation from a set of (x,y) value pairs with equidistant x, sorted ascendingly by x. + + + + + Create a barycentric polynomial interpolation from an unordered set of (x,y) value pairs with equidistant x. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a barycentric polynomial interpolation from an unsorted set of (x,y) value pairs with equidistant x. + + + + + Create a barycentric polynomial interpolation from a set of values related to linearly/equidistant spaced points within an interval. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. 
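As a usage illustration for the Floater-Hormann constructors above, the sketch below interpolates a few already-sorted samples with the order-3 variant. The data are made up, and order 3 is simply one value from the 3..8 range recommended above; this is a sketch, not a reference implementation.

```csharp
using System;
using MathNet.Numerics.Interpolation;

class BarycentricDemo
{
    static void Main()
    {
        // Sample points must be sorted ascendingly for the *Sorted variant.
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 };
        double[] x = { 0.0, 1.0, 4.0, 9.0, 16.0, 25.0 };   // x(t) = t^2

        // Pole-free rational interpolation (Floater-Hormann), order 3.
        var ip = Barycentric.InterpolateRationalFloaterHormannSorted(t, x, 3);

        Console.WriteLine(ip.Interpolate(2.5));   // close to 6.25

        // Differentiation and integration are not supported by this scheme (see above),
        // so only Interpolate() is called here.
    }
}
```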
+ + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Rational Interpolation (with poles) using Roland Bulirsch and Josef Stoer's Algorithm. + + + + This algorithm supports neither differentiation nor integration. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Bulirsch-Stoer rational interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Cubic Spline Interpolation. + + Supports both differentiation and integration. 
+ + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + third order spline coefficients (N) + + + + Create a Hermite cubic spline interpolation from a set of (x,y) value pairs and their slope (first derivative), sorted ascendingly by x. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + + + + + Create an Akima cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + Akima splines are robust to outliers. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + + + + + Create a cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x, + and custom boundary/termination conditions. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + + + + + Create a natural cubic spline interpolation from a set of (x,y) value pairs + and zero second derivatives at the two boundaries, sorted ascendingly by x. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + + + + + Three-Point Differentiation Helper. + + Sample Points t. + Sample Values x(t). + Index of the point of the differentiation. + Index of the first sample. + Index of the second sample. + Index of the third sample. + The derivative approximation. + + + + Tridiagonal Solve Helper. + + The a-vector[n]. + The b-vector[n], will be modified by this function. + The c-vector[n]. + The d-vector[n], will be modified by this function. + The x-vector[n] + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. 
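A short, hedged example of the cubic-spline members summarised above, using the natural boundary condition on already-sorted data. The sample values are rounded values of sin(t), chosen only for illustration.

```csharp
using System;
using MathNet.Numerics.Interpolation;

class CubicSplineDemo
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] x = { 0.0, 0.8415, 0.9093, 0.1411, -0.7568 };   // samples of sin(t)

        // Natural spline: zero second derivatives at both boundaries, sorted input.
        var spline = CubicSpline.InterpolateNaturalSorted(t, x);

        // Cubic splines support interpolation, differentiation and integration (see above).
        Console.WriteLine(spline.Interpolate(1.5));     // approx. sin(1.5)
        Console.WriteLine(spline.Differentiate(1.5));   // approx. cos(1.5)
        Console.WriteLine(spline.Integrate(0.0, 3.0));  // approx. 1 - cos(3)
    }
}
```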
+ + + + + Interpolation within the range of a discrete set of known data points. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Piece-wise Linear Interpolation. + + Supports both differentiation and integration. + + + Sample points (N+1), sorted ascending + Sample values (N or N+1) at the corresponding points; intercept, zero order coefficients + Slopes (N) at the sample points (first order coefficients): N + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Piece-wise Log-Linear Interpolation + + This algorithm supports differentiation, not integration. + + + + Internal Spline Interpolation + + + + Sample points (N), sorted ascending + Natural logarithm of the sample values (N) at the corresponding points + + + + Create a piecewise log-linear interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. 
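Because every scheme in this section implements the same interpolation contract, callers can work against the interface and query the capability flags before differentiating or integrating. The sketch below uses a piecewise linear spline; the property names SupportsDifferentiation and SupportsIntegration are assumed to correspond to the capability getters listed above, and the data are arbitrary.

```csharp
using System;
using MathNet.Numerics.Interpolation;

class LinearSplineDemo
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0 };
        double[] x = { 0.0, 2.0, 1.0, 3.0 };

        // The concrete spline can be handled through the common interface.
        IInterpolation ip = LinearSpline.InterpolateSorted(t, x);

        if (ip.SupportsDifferentiation)
        {
            Console.WriteLine(ip.Differentiate(0.5));   // slope of the first segment: 2.0
        }

        Console.WriteLine(ip.Interpolate(1.5));         // 1.5
        Console.WriteLine(ip.Integrate(0.0, 3.0));      // piecewise-linear area (4.5 here)
    }
}
```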
+ + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Lagrange Polynomial Interpolation using Neville's Algorithm. + + + + This algorithm supports differentiation, but doesn't support integration. + + + When working with equidistant or Chebyshev sample points it is + recommended to use the barycentric algorithms specialized for + these cases instead of this arbitrary Neville algorithm. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Neville polynomial interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Quadratic Spline Interpolation. + + Supports both differentiation and integration. + + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Left and right boundary conditions. + + + + + Natural Boundary (Zero second derivative). + + + + + Parabolically Terminated boundary. + + + + + Fixed first derivative at the boundary. + + + + + Fixed second derivative at the boundary. + + + + + A step function where the start of each segment is included, and the last segment is open-ended. + Segment i is [x_i, x_i+1) for i < N, or [x_i, infinity] for i = N. + The domain of the function is all real numbers, such that y = 0 where x <. + + Supports both differentiation and integration. 
+ + + Sample points (N), sorted ascending + Samples values (N) of each segment starting at the corresponding sample point. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t. + + + + + Wraps an interpolation with a transformation of the interpolated values. + + Neither differentiation nor integration is supported. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. 
+ + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. 
+ + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. 
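The dense storage and arithmetic members above are easiest to see in a small example. The sketch below (arbitrary sample values, assuming MathNet.Numerics.LinearAlgebra.Double) creates a dense matrix and vector, multiplies them, and evaluates a norm and the trace described in this section.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseMatrixDemo
{
    static void Main()
    {
        // Column-major dense storage; OfArray copies the 2D array into a new memory block.
        var a = DenseMatrix.OfArray(new double[,] {
            { 1.0, 2.0 },
            { 3.0, 4.0 },
        });

        var v = DenseVector.OfArray(new[] { 1.0, 1.0 });

        var av = a * v;                               // matrix * vector
        var ata = a.TransposeThisAndMultiply(a);      // A^T * A without forming the transpose explicitly
        var scaled = 2.0 * a;                         // scalar multiplication

        Console.WriteLine(av);
        Console.WriteLine(ata);
        Console.WriteLine(a.FrobeniusNorm());         // entry-wise Frobenius norm, as described above
        Console.WriteLine(scaled.Trace());            // 10
    }
}
```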
+ + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. 
+ + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. 
+ The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the remainder of. + The divisor to use, + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. 
+ Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a double dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. 
+ Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. 
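The diagonal-only behaviour described above is easiest to see in a short sketch. It uses the documented constructor that binds a raw diagonal array (the three-argument form); the exact overload and the Determinant()/Diagonal() members are the usual Math.NET Numerics surface and are assumed here rather than quoted from this file.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class DiagonalMatrixDemo
{
    static void Main()
    {
        // Binds the raw array as the diagonal storage (not copied, as noted above).
        var d = new DiagonalMatrix(3, 3, new[] { 2.0, 3.0, 4.0 });

        Console.WriteLine(d.Determinant()); // product of the diagonal entries: 24
        Console.WriteLine(d.Diagonal());    // the diagonal returned as a vector

        d[1, 1] = 5.0;    // allowed: element lies on the diagonal
        // d[0, 1] = 1.0; // would throw: only 0.0 or NaN may be written off the diagonal
    }
}
```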
+ + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. 
+ + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. 
This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. 
+ + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + Matrix V is encoded in the property EigenVectors in the way that: + - column corresponding to real eigenvalue represents real eigenvector, + - columns corresponding to the pair of complex conjugate eigenvalues + lambda[i] and lambda[i+1] encode real and imaginary parts of eigenvectors. + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. 
+ In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. 
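The factorization classes above are easiest to read as a factor-once / solve-many pattern. The sketch below shows this for a Cholesky factorization; the builder calls (Matrix<double>.Build, Vector<double>.Build) and the Cholesky()/Solve()/Determinant members are the usual Math.NET Numerics surface and are assumed here, not taken from this file.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class CholeskySolveDemo
{
    static void Main()
    {
        // Symmetric, positive definite example matrix (made-up data).
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0, 0.0 },
            { 1.0, 3.0, 1.0 },
            { 0.0, 1.0, 2.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

        // Factor once (A = L*L'); the factorization object is then reused
        // for determinant queries and for solving AX = B / Ax = b.
        var chol = a.Cholesky();
        var x = chol.Solve(b);

        Console.WriteLine("det(A)   = " + chol.Determinant);
        Console.WriteLine("x        = " + x);
        Console.WriteLine("residual = " + (b - a * x).L2Norm());
    }
}
```

The LU, QR, SVD and EVD classes documented in this file typically follow the same pattern: the matrix exposes LU(), QR(), Svd() and Evd() methods that return a cached factorization with Solve overloads for both vectors and matrices.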
+ + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. 
+ + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. 
+ Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + double version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. 
+ + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. 
+ + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. 
+ + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
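The example code referred to above is not included in this XML file. As a rough orientation, a call following the documented Solve(matrix, input, result, iterator, preconditioner) signature could look like the sketch below; the type names (BiCgStab, Iterator<double>, the stop criteria and DiagonalPreconditioner) are the usual Math.NET Numerics ones and are an assumption rather than a quote from this file.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabDemo
{
    static void Main()
    {
        // Small non-symmetric test system Ax = b (made-up data).
        var a = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0, 0.0 },
            { 2.0, 5.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });
        var x = Vector<double>.Build.Dense(b.Count); // result vector, filled by the solver

        // Stop after 1000 iterations or once the residual drops below 1e-10.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // As the remarks above note, the preconditioner matters; the diagonal
        // (Jacobi) preconditioner documented further down is the simplest choice.
        var solver = new BiCgStab();
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine("x        = " + x);
        Console.WriteLine("|b - Ax| = " + (b - a * x).L2Norm());
    }
}
```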
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
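As above, the referenced example code is not part of this file. A minimal sketch under the same assumptions (class name GpBiCg and the two switching-step properties described below) might look like this:

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class GpBiCgExample
{
    // Solves A x = b with GPBiCG, alternating between BiCGStab-like and GPBiCG-like phases.
    public static Vector<double> Solve(Matrix<double> a, Vector<double> b)
    {
        var x = Vector<double>.Build.Dense(b.Count);
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new GpBiCg
        {
            // Illustrative switching counts; see the step-count properties documented below.
            NumberOfBiCgStabSteps = 2,
            NumberOfGpBiCgSteps = 4
        };
        solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());
        return x;
    }
}
```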
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
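For orientation, a sketch of plugging this preconditioner into one of the iterative solvers above; the class name ILU0Preconditioner is an assumption (it is not stated in this file), and the solver is expected to build the incomplete factorization from the matrix when it initializes the preconditioner.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class Ilu0Example
{
    // Uses the level-0 incomplete LU factorization of A as preconditioner for BiCGStab.
    public static Vector<double> Solve(Matrix<double> a, Vector<double> b)
    {
        var x = Vector<double>.Build.Dense(b.Count);
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(500),
            new ResidualStopCriterion<double>(1e-8));

        new BiCgStab().Solve(a, b, x, iterator, new ILU0Preconditioner());
        return x;
    }
}
```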
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
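A hedged sketch of constructing this preconditioner with the FillLevel, DropTolerance and PivotTolerance settings described below; the class name ILUTPPreconditioner is an assumption, the property names follow the descriptions in this file.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class IlutpExample
{
    // ILUTP: incomplete LU with drop tolerance and partial pivoting, used as preconditioner.
    public static Vector<double> Solve(Matrix<double> a, Vector<double> b)
    {
        // Settings correspond to the properties documented below;
        // a pivot tolerance of 0.0 means no pivoting takes place.
        var preconditioner = new ILUTPPreconditioner
        {
            FillLevel = 200,
            DropTolerance = 1e-4,
            PivotTolerance = 0.0
        };

        var x = Vector<double>.Build.Dense(b.Count);
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(500),
            new ResidualStopCriterion<double>(1e-8));

        new BiCgStab().Solve(a, b, x, iterator, preconditioner);
        return x;
    }
}
```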
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
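As a hedged illustration of the sparse-matrix API documented above (method names such as `SparseMatrix.OfIndexed`, `NonZerosCount` and `StrictlyLowerTriangle` follow the MathNet.Numerics naming this file describes; treat them as assumptions if your version differs):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseMatrixSketch
{
    static void Main()
    {
        // Only the non-zero cells are stored in the CSR-backed matrix.
        var entries = new[]
        {
            Tuple.Create(0, 0, 2.0),
            Tuple.Create(1, 1, 3.0),
            Tuple.Create(2, 0, -1.0)
        };
        var sparse = SparseMatrix.OfIndexed(3, 3, entries);

        Console.WriteLine(sparse.NonZerosCount);       // number of stored non-zero cells
        Console.WriteLine(sparse.FrobeniusNorm());     // entry-wise Frobenius norm
        var lower = sparse.StrictlyLowerTriangle();    // lower triangle without the diagonal
        Console.WriteLine(lower);
    }
}
```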
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
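A short sketch of the sparse-vector behaviour documented above, including the warning that adding a non-zero scalar fills every cell (names such as `SparseVector.OfIndexedEnumerable` assume the MathNet.Numerics API; the values are invented for illustration):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class SparseVectorSketch
{
    static void Main()
    {
        // Only two of the 1000 cells are actually stored.
        var v = SparseVector.OfIndexedEnumerable(1000, new[]
        {
            Tuple.Create(10, 1.5),
            Tuple.Create(500, -2.0)
        });
        var w = SparseVector.OfIndexedEnumerable(1000, new[] { Tuple.Create(10, 4.0) });

        Console.WriteLine(v.NonZerosCount);   // 2
        Console.WriteLine(v.DotProduct(w));   // 1.5 * 4.0 = 6.0
        Console.WriteLine(v.L1Norm());        // sum of absolute values = 3.5

        // As warned above: adding a non-zero scalar makes every cell non-zero,
        // so the "sparse" result is effectively dense and inefficient.
        var filled = v.Add(1.0);
        Console.WriteLine(filled.GetType().Name);
    }
}
```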
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + double version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
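The pointwise operations and norms documented above can be sketched as follows, assuming MathNet.Numerics-style `Vector<double>` methods (the numeric values are illustrative only):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class PointwiseSketch
{
    static void Main()
    {
        var a = Vector<double>.Build.DenseOfArray(new[] { 1.0, -4.0, 9.0 });
        var c = Vector<double>.Build.DenseOfArray(new[] { 2.0, 2.0, 3.0 });

        var product = a.PointwiseMultiply(c);   // { 2, -8, 27 }
        var powered = a.PointwisePower(2.0);    // { 1, 16, 81 }
        Console.WriteLine(product);
        Console.WriteLine(powered);

        Console.WriteLine(a.Norm(3.0));         // p-norm: (sum |a[i]|^p)^(1/p) with p = 3
        Console.WriteLine(a.InfinityNorm());    // maximum absolute value = 9
        Console.WriteLine(a.MaximumIndex());    // index of the maximum element = 2
    }
}
```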
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
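The difference between the copying factories and the raw column-major binding constructor described above can be sketched like this (assuming the MathNet.Numerics `DenseMatrix` API; the data values are invented):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseMatrixConstructionSketch
{
    static void Main()
    {
        // Copying factory: the matrix is independent of the source array.
        var copied = DenseMatrix.OfArray(new double[,]
        {
            { 1.0, 2.0 },
            { 3.0, 4.0 }
        });

        // Raw binding: the column-major array is used directly, so changes to
        // the array and the matrix affect each other (efficient, but be careful).
        var raw = new[] { 1.0, 3.0, 2.0, 4.0 };   // column 0 = (1,3), column 1 = (2,4)
        var bound = new DenseMatrix(2, 2, raw);
        raw[0] = 99.0;                            // bound[0, 0] is now 99.0 as well
        Console.WriteLine(bound[0, 0]);
    }
}
```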
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. 
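A brief sketch of the dense-matrix arithmetic, norms and trace documented above, again assuming MathNet.Numerics-style `Matrix<double>` operators and methods:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class DenseMatrixArithmeticSketch
{
    static void Main()
    {
        // Initialize each value with an init function; build an identity matrix.
        var m = Matrix<double>.Build.Dense(3, 3, (i, j) => i + 2.0 * j);
        var id = Matrix<double>.Build.DenseIdentity(3);

        var sum = m + id;               // element-wise addition
        var prod = m * m.Transpose();   // matrix product with the transpose
        var scaled = 2.0 * m;           // scalar multiplication
        Console.WriteLine(sum);
        Console.WriteLine(prod);
        Console.WriteLine(scaled);

        Console.WriteLine(m.L1Norm());        // maximum absolute column sum
        Console.WriteLine(m.InfinityNorm());  // maximum absolute row sum
        Console.WriteLine(m.FrobeniusNorm()); // sqrt of the sum of squared entries
        Console.WriteLine(m.Trace());         // sum of the diagonal (square matrices only)
    }
}
```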
+ + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. 
+ The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. 
+ The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiply this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply this one by. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a float dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. 
+ + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
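To illustrate the dense-vector binding constructor and the diagonal-matrix storage rules described above, here is a hedged sketch (the names `DenseVector.OfArray` and `DiagonalMatrix.OfDiagonal` assume the MathNet.Numerics API; the values are invented):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseVectorAndDiagonalSketch
{
    static void Main()
    {
        // Binding constructor: the array is used directly, vector and array share storage.
        var data = new[] { 1.0, 2.0, 3.0 };
        var v = new DenseVector(data);
        data[0] = 10.0;                          // v[0] is now 10.0 as well

        // Copying factory: independent of the source array.
        var w = DenseVector.OfArray(new[] { 4.0, 5.0, 6.0 });
        Console.WriteLine(v.DotProduct(w));      // 10*4 + 2*5 + 3*6 = 68

        // A diagonal matrix stores only its diagonal; setting an off-diagonal
        // entry to a non-zero value throws, as documented above.
        var d = DiagonalMatrix.OfDiagonal(3, 3, new[] { 1.0, 2.0, 3.0 });
        Console.WriteLine(d.Determinant());      // product of the diagonal entries = 6
    }
}
```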
+ + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. 
+ + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. 
On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. 
+ The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. 
+ Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. 
+ If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. 
+ If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. 
On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + float version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. 
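The factorization classes documented above (Cholesky, LU, QR, EVD, SVD) all expose Solve overloads for AX = B and Ax = b. As a rough illustration of how they are typically driven, here is a minimal sketch using the fluent Matrix<double>.Build / A.Cholesky() style of current MathNet.Numerics releases; the exact constructors and the float variants documented here may differ in detail.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSolveSketch
{
    static void Main()
    {
        // Symmetric positive definite matrix A and right hand side b.
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        // Cholesky: A = L*L'. The factorization throws if A is not
        // symmetric positive definite, as documented above.
        var x = A.Cholesky().Solve(b);

        // LU and QR solve the same square system; QR (Householder) also
        // handles tall systems with more rows than columns.
        var xLu = A.LU().Solve(b);
        var xQr = A.QR().Solve(b);

        Console.WriteLine(x);
        Console.WriteLine((A * x - b).L2Norm()); // residual norm, should be ~0
    }
}
```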
+ + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. 
+ + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
Note that much of the success of the solver depends on the selection of the proper preconditioner.

The Bi-CGSTAB algorithm was taken from:
"Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods",
Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst.
Url: http://www.netlib.org/templates/Templates.html
The algorithm is described in Chapter 2, section 2.3.8, page 27.

The example code below provides an indication of the possible use of the solver.
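As a stand-in for the example the remarks refer to, the sketch below drives an iterative solve of a small non-symmetric system. It assumes the BiCgStab solver class, the SolveIterative entry point and the stop-criterion types of current MathNet.Numerics releases, which may differ from the exact API documented here.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        // Small non-symmetric system A*x = b.
        var A = SparseMatrix.OfArray(new double[,]
        {
            { 4.0, 1.0, 0.0 },
            { 2.0, 5.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

        // Stop after at most 1000 iterations or once the residual is small enough.
        var x = A.SolveIterative(b, new BiCgStab(),
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        Console.WriteLine((A * x - b).L2Norm()); // norm of the true residual
    }
}
```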
Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax
Instance of the matrix A.
Vector in which the residual values are stored.
Instance of the vector x.
Instance of the vector b.

Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the solution vector and x is the unknown vector.
The coefficient matrix, A.
The solution vector, b.
The result vector, x.
The iterator to use to control when to stop iterating.
The preconditioner to use for approximations.

A composite matrix solver. The actual solver is made up of a sequence of matrix solvers.

Solver based on:
"Faster PDE-based simulations using robust composite linear solvers",
S. Bhowmick, P. Raghavan, L. McInnes and B. Norris,
Future Generation Computer Systems, Vol. 20, 2004, pp. 373-387.

Note that if an iterator is passed to this solver it will be used for all the sub-solvers.
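The composite idea itself is easy to show without the library: try a sequence of solve routines in order and accept the first answer whose true residual is small enough. The sketch below is only a conceptual illustration of that behaviour, not the library's composite solver.

```csharp
using System;
using System.Collections.Generic;

static class CompositeSolveSketch
{
    // Each candidate maps (A, b) to an approximate solution x, or throws.
    public static double[] Solve(
        double[,] A, double[] b,
        IEnumerable<Func<double[,], double[], double[]>> solvers,
        double tolerance = 1e-8)
    {
        foreach (var solver in solvers)
        {
            try
            {
                var x = solver(A, b);
                if (ResidualNorm(A, x, b) <= tolerance)
                    return x;                     // first solver that converged wins
            }
            catch
            {
                // This candidate failed; fall through to the next solver in the sequence.
            }
        }
        throw new InvalidOperationException("No solver in the sequence converged.");
    }

    // ||b - A*x||, the true residual used as the acceptance test.
    static double ResidualNorm(double[,] A, double[] x, double[] b)
    {
        int n = b.Length;
        double sum = 0.0;
        for (int i = 0; i < n; i++)
        {
            double r = b[i];
            for (int j = 0; j < n; j++) r -= A[i, j] * x[j];
            sum += r * r;
        }
        return Math.Sqrt(sum);
    }
}
```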
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
Note that much of the success of the solver depends on the selection of the proper preconditioner.

The GPBiCG algorithm was taken from:
"GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness",
S. Fujino,
Applied Numerical Mathematics, Volume 41, 2002, pp. 107-117.

The example code below provides an indication of the possible use of the solver.
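Usage follows the same SolveIterative pattern as the BiCgStab sketch above, substituting a GpBiCg solver instance (class name assumed from current MathNet.Numerics releases). The hybrid nature of the method shows up in the two switch counts documented just below: the solver alternates between a block of BiCGStab-style steps and a block of GPBiCG-style steps. A tiny sketch of one possible alternation rule (illustrative only, not the library's actual decision logic):

```csharp
static class GpBiCgSwitchSketch
{
    // Illustrative only: iterations are grouped into cycles of
    // (biCgStabSteps + gpBiCgSteps); the first part of each cycle takes
    // BiCGStab-style steps, the remainder takes GPBiCG-style steps.
    public static bool UseBiCgStabStep(int iteration, int biCgStabSteps, int gpBiCgSteps)
    {
        int cycle = biCgStabSteps + gpBiCgSteps;
        return (iteration % cycle) < biCgStabSteps;
    }
}
```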
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
"Iterative methods for sparse linear systems",
Yousef Saad.
The algorithm is described in Chapter 10, section 10.3.2, page 275.
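Applying an ILU(0) preconditioner amounts to one forward and one backward substitution with the incomplete factors. A minimal sketch, assuming L and U are packed into a single array with a unit diagonal for L (a simplification of the combined storage described below):

```csharp
static class IluApplySketch
{
    // Approximates the solution of M*z = rhs where M = L*U is the incomplete factorization.
    // 'lu' holds U on and above the diagonal and the strictly lower part of L below it;
    // L is assumed to have a unit diagonal (a simplifying convention for this sketch).
    public static double[] Approximate(double[,] lu, double[] rhs)
    {
        int n = rhs.Length;
        var y = new double[n];
        var z = new double[n];

        // Forward substitution: L*y = rhs.
        for (int i = 0; i < n; i++)
        {
            double sum = rhs[i];
            for (int j = 0; j < i; j++) sum -= lu[i, j] * y[j];
            y[i] = sum;
        }

        // Backward substitution: U*z = y.
        for (int i = n - 1; i >= 0; i--)
        {
            double sum = y[i];
            for (int j = i + 1; j < n; j++) sum -= lu[i, j] * z[j];
            z[i] = sum / lu[i, i];
        }

        return z;
    }
}
```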
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
"ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner",
Tzu-Yi Chen, Department of Mathematics and Computer Science, Pomona College, Claremont CA 91711, USA.
Published in: Lecture Notes in Computer Science, Volume 3046 / 2004, pp. 20-28.
The algorithm is described in Section 2, page 22.
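The drop tolerance is simply an absolute threshold applied to candidate entries of the incomplete factors. A small illustrative sketch of that test (not the ILUTP-Mem algorithm itself):

```csharp
static class DropToleranceSketch
{
    // Drop (zero out) every entry whose absolute value falls below the tolerance.
    // ILUTP applies this kind of test to candidate fill-in entries of L and U.
    public static void DropSmallEntries(double[,] factor, double dropTolerance)
    {
        for (int i = 0; i < factor.GetLength(0); i++)
        {
            for (int j = 0; j < factor.GetLength(1); j++)
            {
                if (System.Math.Abs(factor[i, j]) < dropTolerance)
                {
                    factor[i, j] = 0.0;
                }
            }
        }
    }
}
```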
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
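The pivoting rule quoted above (pivot if row(i,j) > row(i,i) / PivotTolerance for some j not equal to i) translates into a small predicate. The sketch below compares absolute values, which is an added assumption; it is illustrative only, not the library code:

```csharp
static class PivotCheckSketch
{
    // Returns true if row i should be pivoted: some off-diagonal entry exceeds
    // |row(i,i)| / pivotTolerance. A tolerance of 0.0 disables pivoting entirely.
    public static bool NeedsPivoting(double[,] m, int i, double pivotTolerance)
    {
        if (pivotTolerance <= 0.0)
        {
            return false;
        }

        double threshold = System.Math.Abs(m[i, i]) / pivotTolerance;
        for (int j = 0; j < m.GetLength(1); j++)
        {
            if (j != i && System.Math.Abs(m[i, j]) > threshold)
            {
                return true;
            }
        }
        return false;
    }
}
```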
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
"ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors",
Man-Chung Yeung and Tony F. Chan,
SIAM Journal of Scientific Computing, Volume 21, Number 4, pp. 1263-1290.

The example code below provides an indication of the possible use of the solver.
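Usage mirrors the BiCgStab sketch earlier, with an MlkBiCgStab solver instance (class name assumed from current MathNet.Numerics releases). The solver's distinguishing ingredient is its set of orthonormal starting vectors for the Krylov sub-space; a standalone sketch of producing such a set from random vectors with modified Gram-Schmidt (illustrative, the solver has its own internal routine for this):

```csharp
static class StartingVectorsSketch
{
    // Builds 'count' random vectors of the given length and orthonormalizes them
    // with modified Gram-Schmidt, producing a basis usable as Krylov starting vectors.
    public static double[][] CreateOrthonormal(int count, int length, System.Random rng)
    {
        var v = new double[count][];
        for (int i = 0; i < count; i++)
        {
            v[i] = new double[length];
            for (int k = 0; k < length; k++) v[i][k] = rng.NextDouble() - 0.5;

            // Remove the components along the vectors accepted so far.
            for (int j = 0; j < i; j++)
            {
                double dot = 0.0;
                for (int k = 0; k < length; k++) dot += v[i][k] * v[j][k];
                for (int k = 0; k < length; k++) v[i][k] -= dot * v[j][k];
            }

            // Normalize to unit length.
            double norm = 0.0;
            for (int k = 0; k < length; k++) norm += v[i][k] * v[i][k];
            norm = System.Math.Sqrt(norm);
            for (int k = 0; k < length; k++) v[i][k] /= norm;
        }
        return v;
    }
}
```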
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
"Iterative methods for sparse linear systems",
Yousef Saad.
The algorithm is described in Chapter 7, section 7.4.3, page 219.

The example code below provides an indication of the possible use of the solver.
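A TFQMR solve follows the same pattern as the BiCgStab sketch earlier; the example below additionally passes explicit stop criteria. The class name TFQMR and the SolveIterative overload are taken from current MathNet.Numerics releases and are assumptions relative to the version documented here.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

class TfqmrSketch
{
    static void Main()
    {
        var A = SparseMatrix.OfArray(new double[,]
        {
            { 5.0, 2.0, 0.0 },
            { 1.0, 4.0, 1.0 },
            { 0.0, 2.0, 6.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 0.0, 1.0 });

        // Explicit stopping rules: an iteration budget plus a residual target.
        var x = A.SolveIterative(b, new TFQMR(),
            new IterationCountStopCriterion<double>(500),
            new ResidualStopCriterion<double>(1e-8));

        Console.WriteLine((A * x - b).L2Norm());
    }
}
```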
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
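The 3-array compressed-sparse-row layout named in the sparse matrix summary above stores only the non-zero values plus two index arrays. A standalone sketch of that layout and of reading one row back (the class manages this storage internally; the array names here are illustrative):

```csharp
using System;

class CsrSketch
{
    static void Main()
    {
        // The 3x3 matrix   [ 1 0 2 ]
        //                  [ 0 0 3 ]
        //                  [ 4 5 0 ]   stored in CSR form:
        double[] values      = { 1.0, 2.0, 3.0, 4.0, 5.0 }; // non-zero values, row by row
        int[]    columnIndex = { 0,   2,   2,   0,   1   }; // column of each stored value
        int[]    rowPointer  = { 0, 2, 3, 5 };              // where each row starts in 'values'

        // Read back row 2 (zero based) by walking its slice of the value array.
        int row = 2;
        for (int k = rowPointer[row]; k < rowPointer[row + 1]; k++)
        {
            Console.WriteLine($"({row},{columnIndex[k]}) = {values[k]}");
        }
    }
}
```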
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
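Building a sparse vector from (index, value) pairs, as described above, goes through the fluent builder. A hedged sketch assuming the Build.SparseOfIndexed method of current MathNet.Numerics releases, shown with the float element type documented here:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SparseVectorSketch
{
    static void Main()
    {
        // Only the listed indices are non-zero; every other entry is implicitly zero.
        var v = Vector<float>.Build.SparseOfIndexed(1000, new[]
        {
            Tuple.Create(3, 1.5f),
            Tuple.Create(250, -2.0f),
            Tuple.Create(999, 4.0f)
        });

        Console.WriteLine(v[250]);     // -2
        Console.WriteLine(v.L2Norm());

        // Note: adding a non-zero scalar (v + 1.0f) would densify the result,
        // exactly as the remarks above warn.
    }
}
```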
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a float sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + float version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
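+
+ The pointwise and dot-product members listed above compose as in the short sketch below (MathNet.Numerics assumed; the vectors and the expected values in the comments are invented for illustration).
+
+ using System;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class PointwiseSketch
+ {
+     static void Main()
+     {
+         var a = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
+         var b = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });
+
+         var product  = a.PointwiseMultiply(b);   // [4, 10, 18]
+         var quotient = a.PointwiseDivide(b);     // [0.25, 0.4, 0.5]
+         var squared  = a.PointwisePower(2.0);    // [1, 4, 9]
+
+         // Dot product: the sum of a[i]*b[i] for all i.
+         double dot = a.DotProduct(b);            // 32
+
+         Console.WriteLine(dot);
+         Console.WriteLine(product.ToVectorString());
+     }
+ }
+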
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
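+
+ A hedged sketch of a few of the dense-matrix factory routines described above; MathNet.Numerics is assumed, and the helper names (DenseOfArray, DenseOfColumnVectors, DenseOfDiagonalArray) are inferred from these docs, so check them against the library version actually shipped.
+
+ using System;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class DenseMatrixSketch
+ {
+     static void Main()
+     {
+         // Copy of a two-dimensional array (independent memory block).
+         var a = Matrix<double>.Build.DenseOfArray(new double[,]
+         {
+             { 1.0, 2.0 },
+             { 3.0, 4.0 }
+         });
+
+         // The same matrix built from column vectors.
+         var c0 = Vector<double>.Build.DenseOfArray(new[] { 1.0, 3.0 });
+         var c1 = Vector<double>.Build.DenseOfArray(new[] { 2.0, 4.0 });
+         var b = Matrix<double>.Build.DenseOfColumnVectors(c0, c1);
+
+         // Dense matrix whose diagonal is copied from an array; off-diagonal cells stay zero.
+         var d = Matrix<double>.Build.DenseOfDiagonalArray(new[] { 1.0, 2.0, 3.0 });
+
+         Console.WriteLine(a.Equals(b));      // True
+         Console.WriteLine(d.ToMatrixString());
+     }
+ }
+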
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. 
+ Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. 
+ + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex value. + The result of the division. + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. 
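+
+ The difference between the plain and the conjugated dot product documented above is easy to miss; the sketch below (MathNet.Numerics assumed, values invented) shows both on a small complex vector.
+
+ using System;
+ using System.Numerics;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class ComplexVectorSketch
+ {
+     static void Main()
+     {
+         var a = Vector<Complex>.Build.DenseOfArray(new[] { new Complex(1, 1), new Complex(0, 2) });
+         var b = Vector<Complex>.Build.DenseOfArray(new[] { new Complex(2, 0), new Complex(1, -1) });
+
+         Complex dot  = a.DotProduct(b);           // sum of a[i]*b[i]       = 4 + 4i
+         Complex cdot = a.ConjugateDotProduct(b);  // sum of conj(a[i])*b[i] = -4i
+
+         Console.WriteLine(dot);
+         Console.WriteLine(cdot);
+         Console.WriteLine(a.L2Norm());            // sqrt(|1+i|^2 + |2i|^2) = sqrt(6)
+     }
+ }
+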
+ + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. 
+ + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the Frobenius norm of this matrix. + The Frobenius norm of this matrix. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. 
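+
+ In practice the Cholesky factorization described above is used as in the following sketch (MathNet.Numerics assumed; the 2x2 system is an invented example, and A must be symmetric positive definite or the constructor throws, exactly as the remarks state).
+
+ using System;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class CholeskySketch
+ {
+     static void Main()
+     {
+         var a = Matrix<double>.Build.DenseOfArray(new double[,]
+         {
+             { 4.0, 2.0 },
+             { 2.0, 3.0 }
+         });
+         var b = Vector<double>.Build.DenseOfArray(new[] { 10.0, 8.0 });
+
+         var chol = a.Cholesky();    // factorization computed once, then cached
+         var x = chol.Solve(b);      // solves A*x = b via L*L'
+
+         Console.WriteLine(x.ToVectorString());   // approximately [1.75, 1.5]
+         Console.WriteLine(chol.Determinant);     // 4*3 - 2*2 = 8
+     }
+ }
+
+ Because the factorization is cached at construction time, the same Cholesky object can be reused to solve for several right-hand sides without re-factorizing.
+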
+ + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. 
This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. 
A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . 
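+
+ A short sketch of the SVD members described above (MathNet.Numerics assumed; the 3x2 matrix is an invented example chosen so the singular values are obvious).
+
+ using System;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class SvdSketch
+ {
+     static void Main()
+     {
+         var m = Matrix<double>.Build.DenseOfArray(new double[,]
+         {
+             { 3.0, 0.0 },
+             { 0.0, 2.0 },
+             { 0.0, 0.0 }
+         });
+
+         var svd = m.Svd();   // computes U, the singular values S, and VT
+
+         Console.WriteLine(svd.S.ToVectorString());  // descending order: 3, 2
+         Console.WriteLine(svd.Rank);                // 2 non-negligible singular values
+         Console.WriteLine(svd.L2Norm);              // 3   = max(S)
+         Console.WriteLine(svd.ConditionNumber);     // 1.5 = max(S) / min(S)
+     }
+ }
+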
+ + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. 
+ Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. 
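+
+ The LU solve path documented above looks like this in client code (a hedged sketch with MathNet.Numerics assumed; shown for the double variant, the complex one is used the same way, and the 2x2 system is invented).
+
+ using System;
+ using MathNet.Numerics.LinearAlgebra;
+
+ class LuSketch
+ {
+     static void Main()
+     {
+         var a = Matrix<double>.Build.DenseOfArray(new double[,]
+         {
+             { 2.0, 1.0 },
+             { 1.0, 3.0 }
+         });
+         var b = Vector<double>.Build.DenseOfArray(new[] { 5.0, 10.0 });
+
+         var lu = a.LU();          // P*A = L*U, computed once and cached
+         var x = lu.Solve(b);      // forward/back substitution only, no new factorization
+
+         Console.WriteLine(x.ToVectorString());   // [1, 3]
+         Console.WriteLine(lu.Determinant);       // 2*3 - 1*1 = 5
+         Console.WriteLine(lu.Inverse().ToMatrixString());
+     }
+ }
+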
+ + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex value z1 + Complex value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. 
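+
+ For orientation, the effect of the Givens rotation described above can be reproduced with a few lines of plain C#; this is a simplified illustration only, not the library's internal routine (the DROTG-equivalent additionally handles the sign conventions, the degenerate a = b = 0 case and the reconstruction parameter z).
+
+ using System;
+
+ class GivensSketch
+ {
+     static void Main()
+     {
+         double a = 3.0, b = 4.0;
+
+         // Choose c = cos(theta) and s = sin(theta) so the rotation zeros the y-coordinate:
+         // [  c  s ] [a]   [r]
+         // [ -s  c ] [b] = [0],   with r = sqrt(a^2 + b^2)
+         double r = Math.Sqrt(a * a + b * b);
+         double c = a / r;
+         double s = b / r;
+
+         Console.WriteLine($"r = {r}, c = {c}, s = {s}");       // r = 5, c = 0.6, s = 0.8
+         Console.WriteLine($"rotated y = {-s * a + c * b}");    // 0
+     }
+ }
+
+ The same c/s convention appears in the plane-rotation helper documented below, which applies x(i) = c*x(i) + s*y(i) and y(i) = c*y(i) - s*x(i) column-wise.
+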
+ + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. 
+ + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
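The example code referred to above was not preserved in this extract, and the exact type names of the documented API are not visible here. As a hedged stand-in, the sketch below shows the same idea, solving a non-symmetric sparse system with BiCGStab, using SciPy rather than the documented library.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import bicgstab

# Small non-symmetric sparse system A x = b
A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 1.0, 3.0]]))
b = np.array([1.0, 2.0, 3.0])

x, info = bicgstab(A, b)               # info == 0 means the iteration converged
print(x, np.linalg.norm(b - A @ x))    # true residual b - A x, as defined above
```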
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
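The text above only says that the composite solver chains a sequence of sub-solvers and shares one iterator among them; it does not spell out the selection policy. Purely as an illustration of one plausible policy (try each solver in turn and keep the first converged result), a SciPy-based sketch could look like this; the policy itself is an assumption, not a statement about the library.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import bicgstab, gmres

def composite_solve(A, b, solvers):
    """Try each iterative solver in sequence; return the first converged result."""
    for solve in solvers:
        x, info = solve(A, b)
        if info == 0:              # 0 = converged for SciPy's iterative solvers
            return x
    raise RuntimeError("no solver in the sequence converged")

A = csr_matrix(np.array([[4.0, 1.0], [1.0, 3.0]]))
b = np.array([1.0, 2.0])
x = composite_solve(A, b, [bicgstab, gmres])
```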
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
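GPBiCG itself is not available in common Python libraries, so no drop-in example can be given here. Instead, the sketch below illustrates the diagonal (Jacobi) preconditioner described earlier in this section, the inverse of the matrix diagonal used as the preconditioning operator, applied to a BiCGStab solve. The pairing with BiCGStab and all names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator, bicgstab

A = csr_matrix(np.array([[10.0, 1.0, 0.0],
                         [ 2.0, 8.0, 1.0],
                         [ 0.0, 1.0, 5.0]]))
b = np.array([1.0, 2.0, 3.0])

# Diagonal (Jacobi) preconditioner: apply the inverse of A's diagonal.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda v: inv_diag * v)

x, info = bicgstab(A, b, M=M)
print(info, np.linalg.norm(b - A @ x))
```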
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
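To make the ILU(0) idea concrete, here is a small dense-array sketch of the general technique: LU elimination restricted to the non-zero pattern of A, with the unit-lower and upper factors kept in one combined matrix (as the member documentation below also notes). This is an illustration of the method, not the library's implementation.

```python
import numpy as np

def ilu0(A):
    """ILU(0): LU elimination restricted to the sparsity pattern of A.
    Returns one combined matrix: strict lower part = L (unit diagonal implied),
    upper part including the diagonal = U."""
    LU = A.astype(float).copy()
    pattern = A != 0
    n = A.shape[0]
    for i in range(1, n):
        for k in range(i):
            if pattern[i, k]:
                LU[i, k] /= LU[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:
                        LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
print(ilu0(A))
```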
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
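SciPy's SuperLU-based incomplete LU exposes the same two knobs discussed below, a drop tolerance and a fill factor, so it can serve as a hedged illustration of an ILUT-style preconditioner. It is not the ILUTP-Mem algorithm cited above; treat it only as a sketch of how a drop-tolerance ILU is used.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spilu, LinearOperator, bicgstab

A = csc_matrix(np.array([[10.0, 2.0, 0.0, 0.0],
                         [ 3.0, 9.0, 1.0, 0.0],
                         [ 0.0, 1.0, 8.0, 2.0],
                         [ 0.0, 0.0, 2.0, 7.0]]))
b = np.ones(4)

# Incomplete LU with a drop tolerance and limited fill, used as a preconditioner.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = bicgstab(A, b, M=M)
print(info, np.linalg.norm(b - A @ x))
```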
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
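The example referred to above is likewise not reproducible from this extract. One detail that can be illustrated is the set of orthonormal starting vectors mentioned in the members below: k random vectors orthonormalized here via a thin QR, which is one straightforward way to build such a basis and only an assumption about how the documented solver does it.

```python
import numpy as np

def random_starting_vectors(k, n, seed=0):
    """Return an n-by-k matrix whose columns are orthonormal random vectors."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # thin QR -> orthonormal columns
    return q

V = random_starting_vectors(3, 10)
print(np.allclose(V.T @ V, np.eye(3)))  # True: the columns form an orthonormal set
```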
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
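Again, the referenced example is not preserved here. As a hedged stand-in, recent SciPy releases (1.8 and later, to the best of my knowledge) ship a TFQMR implementation that can be used as follows.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import tfqmr   # available in SciPy >= 1.8

A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 1.0, 3.0]]))
b = np.array([1.0, 2.0, 3.0])

x, info = tfqmr(A, b)                   # info == 0 indicates convergence
print(x, np.linalg.norm(b - A @ x))     # true residual b - A x
```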
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
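The sparse matrix type introduced a little earlier in this section stores its entries in three-array compressed-sparse-row (CSR) format. The sketch below shows what those three arrays (values, column indices, row pointers) look like for a small matrix, using SciPy's csr_matrix purely to illustrate the storage format.

```python
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[1.0, 0.0, 2.0],
                         [0.0, 0.0, 3.0],
                         [4.0, 5.0, 0.0]]))

print(A.data)     # non-zero values, row by row: [1. 2. 3. 4. 5.]
print(A.indices)  # column index of each value:  [0 2 2 0 1]
print(A.indptr)   # row pointers into data:      [0 2 3 5]
print(A.nnz)      # number of non-zero elements: 5
```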
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
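The norm members documented just below (L1/Manhattan, infinity, and the general p-norm (∑|x_i|^p)^(1/p)) reduce to plain NumPy one-liners; a short sketch for reference, with illustrative values only.

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0, 1.0])

l1   = np.sum(np.abs(x))                    # L1 (Manhattan) norm: 8.0
l2   = np.sqrt(np.sum(np.abs(x) ** 2))      # L2 (Euclidean) norm: sqrt(26)
linf = np.max(np.abs(x))                    # infinity norm: 4.0
p    = 3.0
lp   = np.sum(np.abs(x) ** p) ** (1.0 / p)  # general p-norm

print(l1, l2, linf, lp)
```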
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. 
+ + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. 
+ + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. 
+ + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. 
+ + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. 
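The three matrix norms that recur throughout this section, the induced L1 norm (maximum absolute column sum), the induced infinity norm (maximum absolute row sum), and the entry-wise Frobenius norm, are easy to cross-check numerically. A small NumPy sketch, for illustration only:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

induced_l1  = np.max(np.sum(np.abs(A), axis=0))   # max absolute column sum: 6.0
induced_inf = np.max(np.sum(np.abs(A), axis=1))   # max absolute row sum: 7.0
frobenius   = np.sqrt(np.sum(np.abs(A) ** 2))     # sqrt(1 + 4 + 9 + 16) = sqrt(30)

print(induced_l1, induced_inf, frobenius)
print(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf), np.linalg.norm(A, 'fro'))
```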
+ + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. 
+ This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex32 value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex32 value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex32 value. + The result of the division. + If is . 
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex32 dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex32 dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. 
+ All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. 
+ + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. 
+ + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. 
+ + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. 
+ + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. 
Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. 
+ + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex32 value z1 + Complex32 value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex32 version of the class. 
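In user code the factorization classes above are rarely constructed directly; they are obtained from a matrix and then asked to solve AX = B or Ax = b. A minimal sketch, assuming the standard Cholesky()/LU()/QR()/Svd() factory methods on Matrix<double> (the Complex32 versions documented in this file expose the same members):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSketch
{
    static void Main()
    {
        var M = Matrix<double>.Build;
        var V = Vector<double>.Build;

        // Symmetric positive definite system, so every factorization below applies.
        var A = M.DenseOfArray(new double[,] { { 4, 1 }, { 1, 3 } });
        var b = V.Dense(new[] { 1.0, 2.0 });

        var xChol = A.Cholesky().Solve(b); // requires symmetric positive definite A
        var xLu   = A.LU().Solve(b);       // general square A, with partial pivoting
        var xQr   = A.QR().Solve(b);       // also solves tall systems in the least-squares sense
        var xSvd  = A.Svd().Solve(b);      // most robust; exposes rank and condition number

        Console.WriteLine(xChol);
        Console.WriteLine("det(A) = " + A.LU().Determinant
                          + ", cond(A) = " + A.Svd().ConditionNumber);
    }
}
```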
+ + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
+ + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. 
+ Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
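The example block referred to in the sentence above did not survive in this dump, so the following is a hedged reconstruction of typical BiCGStab usage with the five-argument Solve call documented just below (coefficient matrix, right-hand side, result vector, iterator, preconditioner). The class names BiCgStab, Iterator<T>, IterationCountStopCriterion, ResidualStopCriterion and DiagonalPreconditioner are the usual Math.NET Numerics ones and should be treated as assumptions relative to this excerpt.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        var M = Matrix<double>.Build;
        var V = Vector<double>.Build;

        // Small non-symmetric test system; BiCGStab does not require symmetry.
        var A = M.DenseOfArray(new double[,]
        {
            { 4, -1,  0 },
            { 1,  5, -2 },
            { 0,  1,  3 }
        });
        var b = V.Dense(new[] { 1.0, 2.0, 3.0 });
        var x = V.Dense(3);                 // result vector, filled in place

        // Stop after 1000 iterations or when the residual is small enough.
        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        var solver = new BiCgStab();
        solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
        Console.WriteLine("residual = " + (b - A * x).L2Norm());
    }
}
```

The Complex32 solver documented here is used the same way; only the element type of the matrix and vectors changes.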
+ Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax
+ Instance of the matrix A.
+ Residual values in a vector.
+ Instance of the vector x.
+ Instance of the vector b.
+
+ Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the
+ solution vector and x is the unknown vector.
+ The coefficient matrix, A.
+ The solution vector, b.
+ The result vector, x.
+ The iterator to use to control when to stop iterating.
+ The preconditioner to use for approximations.
+
+ A composite matrix solver. The actual solver is made by a sequence of
+ matrix solvers.
+
+ Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
Future Generation Computer Systems, Vol 20, 2004, pp 373 - 387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
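The construction of the composite solver itself is not shown in this excerpt, so the sketch below only illustrates the behaviour described above by hand: one set of stop criteria shared by several candidate solvers that are tried in sequence until the residual is acceptable. Solver and criterion class names are assumed to be the usual Math.NET Numerics ones.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class SequentialSolversSketch
{
    static void Main()
    {
        var M = Matrix<double>.Build;
        var V = Vector<double>.Build;

        var A = M.DenseOfArray(new double[,] { { 5, 2, 0 }, { 2, 5, 1 }, { 0, 1, 4 } });
        var b = V.Dense(new[] { 1.0, 0.0, 1.0 });

        // One set of stop criteria governs every candidate solver,
        // mirroring the remark above about the shared iterator.
        IIterationStopCriterion<double>[] criteria =
        {
            new IterationCountStopCriterion<double>(500),
            new ResidualStopCriterion<double>(1e-10)
        };

        IIterativeSolver<double>[] candidates = { new BiCgStab(), new TFQMR() };

        foreach (var solver in candidates)
        {
            var x = V.Dense(b.Count);
            solver.Solve(A, b, x, new Iterator<double>(criteria), new DiagonalPreconditioner());

            var residual = (b - A * x).L2Norm();
            Console.WriteLine(solver.GetType().Name + ": residual " + residual);
            if (residual < 1e-8) break;   // good enough, stop trying further solvers
        }
    }
}
```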
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
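GPBiCG plugs into the same five-argument Solve call as BiCgStab; the only extra tuning is the pair of switching thresholds documented just below. In the sketch the property names NumberOfBiCgStabSteps and NumberOfGpBiCgSteps are assumptions derived from those summaries, so check them against the library version actually shipped.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class GpBiCgSketch
{
    static void Solve(Matrix<double> A, Vector<double> b, Vector<double> x)
    {
        var solver = new GpBiCg
        {
            // Assumed property names: how many BiCGStab-style steps to take
            // before switching to GPBiCG steps, and vice versa.
            NumberOfBiCgStabSteps = 2,
            NumberOfGpBiCgSteps = 4
        };

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());
    }
}
```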
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
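A preconditioner is created once, initialized with the system matrix, and then either handed to a solver or queried directly through the Initialize/Approximate members documented below. A sketch of both uses, assuming the class is exposed as ILU0Preconditioner in the Double.Solvers namespace (the Complex32 variant in this file is analogous):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class Ilu0Sketch
{
    static void Main()
    {
        var A = Matrix<double>.Build.SparseOfIndexed(3, 3, new[]
        {
            Tuple.Create(0, 0, 4.0), Tuple.Create(0, 1, -1.0),
            Tuple.Create(1, 0, -1.0), Tuple.Create(1, 1, 4.0), Tuple.Create(1, 2, -1.0),
            Tuple.Create(2, 1, -1.0), Tuple.Create(2, 2, 4.0)
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

        // Use 1: hand the preconditioner to an iterative solver.
        var x = Vector<double>.Build.Dense(3);
        new BiCgStab().Solve(A, b, x,
            new Iterator<double>(new ResidualStopCriterion<double>(1e-10),
                                 new IterationCountStopCriterion<double>(200)),
            new ILU0Preconditioner());

        // Use 2: query it directly as a cheap approximate inverse,
        // exactly the Initialize/Approximate pair documented below.
        var ilu = new ILU0Preconditioner();
        ilu.Initialize(A);
        var z = Vector<double>.Build.Dense(3);
        ilu.Approximate(b, z);   // z is an approximate solution of A*z = b

        Console.WriteLine(x);
        Console.WriteLine(z);
    }
}
```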
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
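What distinguishes ILUTP from the plain ILU(0) factorization above are the three knobs documented below: fill level, drop tolerance and pivot tolerance. The sketch assumes the class is exposed as ILUTPPreconditioner with a (fillLevel, dropTolerance, pivotTolerance) constructor matching those member descriptions; treat both the name and the parameter order as assumptions.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class IlutpSketch
{
    static Vector<double> Solve(Matrix<double> A, Vector<double> b)
    {
        // Assumed constructor argument order, taken from the parameter descriptions below:
        //   fill level      : how much fill-in to allow relative to the non-zeros of A
        //   drop tolerance  : entries with absolute value below this are dropped
        //   pivot tolerance : 0.0 disables pivoting entirely
        var preconditioner = new ILUTPPreconditioner(10.0, 1e-4, 0.0);

        var x = Vector<double>.Build.Dense(b.Count);
        new BiCgStab().Solve(A, b, x,
            new Iterator<double>(new ResidualStopCriterion<double>(1e-10),
                                 new IterationCountStopCriterion<double>(1000)),
            preconditioner);
        return x;
    }
}
```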
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal on Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
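The following is a hedged sketch of how a solver with the Solve(matrix, input, result, iterator, preconditioner) signature documented below is typically driven. The type names (MlkBiCgStab, Iterator, IterationCountStopCriterion, ResidualStopCriterion, DiagonalPreconditioner, SparseMatrix, DenseVector) are assumptions taken from Math.NET Numerics and may differ between versions.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Hedged sketch: solve A x = b with the ML(k)-BiCGStab solver and a simple
// diagonal preconditioner. Type names are assumptions (see note above).
var a = SparseMatrix.OfArray(new double[,]
{
    { 4, 1, 0 },
    { 1, 4, 1 },
    { 0, 1, 4 }
});
var b = DenseVector.OfArray(new double[] { 1, 2, 3 });
var x = Vector<double>.Build.Dense(3);   // result vector, filled by Solve

// Stop after 1000 iterations or once the residual is small enough.
var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(1000),
    new ResidualStopCriterion<double>(1e-10));

var solver = new MlkBiCgStab();
solver.Solve(a, b, x, iterator, new DiagonalPreconditioner());
```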
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative Methods for Sparse Linear Systems. +
+ Yousef Saad +
+ The algorithm is described in Chapter 7, Section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
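TFQMR exposes the same Solve(matrix, input, result, iterator, preconditioner) shape as the solver above, so the earlier sketch carries over with the solver swapped. MILU0Preconditioner is assumed to be the milu(0) preconditioner described earlier in this file; its name and constructor are assumptions.

```csharp
// Continuing the ML(k)-BiCGStab sketch above (same a, b, x, iterator and
// using directives). Only the solver and preconditioner change.
var tfqmr = new TFQMR();
tfqmr.Solve(a, b, x, iterator, new MILU0Preconditioner());
```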
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
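To make the storage scheme and the norm definitions above concrete, here is a standalone sketch of the 3-array compressed-sparse-row layout and of the induced infinity norm (the maximum absolute row sum). It uses plain arrays and is not the library's SparseCompressedRowMatrixStorage implementation.

```csharp
using System;

// CSR layout of the matrix  [ 1 0 2 ]
//                           [ 0 0 3 ]
//                           [ 4 5 0 ]
double[] values      = { 1, 2, 3, 4, 5 };  // non-zero entries, row by row
int[]    columnIndex = { 0, 2, 2, 0, 1 };  // column of each stored value
int[]    rowPointers = { 0, 2, 3, 5 };     // start of each row in 'values'

// Induced infinity norm: maximum absolute row sum.
double infinityNorm = 0.0;
for (int row = 0; row < rowPointers.Length - 1; row++)
{
    double rowSum = 0.0;
    for (int k = rowPointers[row]; k < rowPointers[row + 1]; k++)
    {
        rowSum += Math.Abs(values[k]);   // columnIndex[k] is not needed for this norm
    }
    infinityNorm = Math.Max(infinityNorm, rowSum);
}
// infinityNorm == 9.0 (third row: |4| + |5|)
```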
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
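A hedged sketch of the operator overloads documented above: each operator allocates a new result and leaves its operands untouched. It is shown for the double-valued Math.NET types for brevity; the type names are assumptions.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

var m1 = SparseMatrix.OfArray(new double[,] { { 1, 0 }, { 0, 2 } });
var m2 = SparseMatrix.OfArray(new double[,] { { 0, 3 }, { 4, 0 } });
var v  = DenseVector.OfArray(new double[] { 1, 1 });

Matrix<double> sum     = m1 + m2;   // matrix + matrix
Matrix<double> scaled  = 2.0 * m1;  // scalar * matrix
Vector<double> product = m1 * v;    // matrix * vector
Matrix<double> negated = -m2;       // unary negation
```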
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
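The two dot products documented above differ only in the conjugation of the left operand. A standalone sketch with System.Numerics.Complex (not the library's own vector types):

```csharp
using System.Numerics;

Complex[] a = { new Complex(1, 2), new Complex(0, -1) };
Complex[] b = { new Complex(3, 0), new Complex(2, 2) };

Complex dot          = Complex.Zero;  // sum of a[i] * b[i]
Complex conjugateDot = Complex.Zero;  // sum of conj(a[i]) * b[i]
for (int i = 0; i < a.Length; i++)
{
    dot          += a[i] * b[i];
    conjugateDot += Complex.Conjugate(a[i]) * b[i];
}
```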
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex32. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex32 version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. 
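A standalone sketch of the p-norm formula quoted above, (Σ|x[i]|^p)^(1/p), and how the documented L1 and infinity norms relate to it; the helper name is made up.

```csharp
using System;
using System.Linq;

static class NormSketch
{
    // p-norm as documented above: ( Σ |x[i]|^p )^(1/p).
    // p = 1 gives the L1 (Manhattan) norm, the sum of absolute values;
    // as p grows the result approaches the infinity norm (maximum absolute
    // value), which is returned directly for p = +infinity.
    public static double PNorm(double[] x, double p)
    {
        if (double.IsPositiveInfinity(p))
        {
            return x.Max(v => Math.Abs(v));
        }

        return Math.Pow(x.Sum(v => Math.Pow(Math.Abs(v), p)), 1.0 / p);
    }
}
```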
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. 
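The distinction drawn above between the canonical modulus (result takes the sign of the divisor) and the remainder (result takes the sign of the dividend) as a standalone sketch; the helper names are made up.

```csharp
static class ModulusSketch
{
    // Remainder (the C# % operator): the result takes the sign of the dividend.
    public static double Remainder(double dividend, double divisor)
        => dividend % divisor;

    // Canonical modulus: the result takes the sign of the divisor.
    public static double Modulus(double dividend, double divisor)
        => ((dividend % divisor) + divisor) % divisor;
}

// ModulusSketch.Remainder(-7, 3) == -1  (sign of the dividend)
// ModulusSketch.Modulus(-7, 3)   ==  2  (sign of the divisor)
```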
+ + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new matrix straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. 
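A hedged sketch of the generic builder described above. In Math.NET Numerics the builders are normally reached through Matrix&lt;T&gt;.Build and Vector&lt;T&gt;.Build; these entry points and the overload names below are assumptions and may differ between versions.

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;
var V = Vector<double>.Build;

var zeros    = M.Dense(3, 3);                               // all cells zero
var identity = M.DenseIdentity(3);                          // one-diagonal identity
var filled   = M.Dense(2, 2, 1.5);                          // every value set to 1.5
var ramp     = V.Dense(5, i => i * 0.1);                    // init function per index
var bound    = M.Dense(2, 2, new[] { 1.0, 2.0, 3.0, 4.0 }); // column-major array, bound directly (no copy)
var sparse   = M.Sparse(1000, 1000);                        // large, mostly-zero matrix
```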
+ + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
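A hedged sketch of the sparse builder variants described above: indexed entries where omitted cells are treated as zero, and a diagonal copy. The method names SparseOfIndexed and SparseOfDiagonalArray are assumptions taken from Math.NET Numerics.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

// Indexed entries: each (row, column) key at most once, omitted cells are zero.
var entries = new[]
{
    Tuple.Create(0, 0, 4.0),
    Tuple.Create(1, 2, -1.5),
    Tuple.Create(3, 3, 2.0)
};
var sparse = M.SparseOfIndexed(4, 4, entries);

// Sparse matrix with the given array as its diagonal.
var diagonal = M.SparseOfDiagonalArray(new[] { 1.0, 2.0, 3.0 });
```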
+ + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. 
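Returning to the block builder described above (a 2D array of existing matrices, where an undersized block lands in the top-left corner of its cell and the remaining fields stay zero), a hedged sketch; the method name DenseOfMatrixArray is an assumption.

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;
var a11 = M.DenseIdentity(2);                   // 2 x 2
var a12 = M.Dense(2, 3);                        // 2 x 3, all zero
var a21 = M.Dense(1, 2, (i, j) => i + j);       // 1 x 2
var a22 = M.Dense(1, 3, (i, j) => 7.0);         // 1 x 3

Matrix<double> block = M.DenseOfMatrixArray(new[,]
{
    { a11, a12 },
    { a21, a22 }
});
// block is 3 x 5
```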
+ + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new matrix straight from an initialized matrix storage instance. 
+ If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). 
+ This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. 
+ Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. 
+ A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + Supported data types are double, single, , and . + + + + Gets the lower triangular form of the Cholesky matrix. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + Supported data types are double, single, , and . + + + + Gets or sets a value indicating whether matrix is symmetric or not + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Gets or sets the eigen values (λ) of matrix in ascending value. + + + + + Gets or sets eigenvectors. + + + + + Gets or sets the block diagonal eigenvalue matrix. + + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + Supported data types are double, single, , and . + + + + Classes that solves a system of linear equations, AX = B. + + Supported data types are double, single, , and . + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, Ax = b + + The right hand side vector, b. + The left hand side Vector, x. + + + + Solves a system of linear equations, Ax = b. + + The right hand side vector, b. + The left hand side Matrix>, x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + Supported data types are double, single, , and . + + + + Gets the lower triangular factor. + + + + + Gets the upper triangular factor. + + + + + Gets the permutation applied to LU factorization. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. 
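+
+ The factorization classes above all follow the same pattern: the factorization is computed once at
+ construction time and the cached factors are then reused for every Solve call. A minimal C# sketch,
+ assuming the usual Math.NET Numerics entry points (Matrix&lt;double&gt;.Build, Cholesky(), LU(), Solve());
+ those member names come from that library and are not spelled out in the comments above:
+
+            using MathNet.Numerics.LinearAlgebra;
+
+            // A is symmetric positive definite, so both Cholesky and LU apply.
+            var A = Matrix<double>.Build.DenseOfArray(new double[,] {
+                { 4.0, 1.0 },
+                { 1.0, 3.0 } });
+            var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });
+
+            var chol = A.Cholesky();          // throws if A is not symmetric positive definite
+            var lu   = A.LU();                // pivoted factorization P*A = L*U
+            Vector<double> x1 = chol.Solve(b);
+            Vector<double> x2 = lu.Solve(b);  // both solve A*x = b; results agree up to rounding
+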
+ + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + The type of QR factorization go perform. + + + + + Compute the full QR factorization of a matrix. + + + + + Compute the thin QR factorization of a matrix. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + Supported data types are double, single, , and . + + + + Gets or sets orthogonal Q matrix + + + + + Gets the upper triangular factor R. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + Supported data types are double, single, , and . + + + Indicating whether U and VT matrices have been computed during SVD factorization. + + + + Gets the singular values (Σ) of matrix in ascending value. + + + + + Gets the left singular vectors (U - m-by-m unitary matrix) + + + + + Gets the transpose right singular vectors (transpose of V, an n-by-n unitary matrix) + + + + + Returns the singular values as a diagonal . + + The singular values as a diagonal . 
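+
+ As a sketch of how the SVD members documented above are typically consumed (the names Svd(), S, W, U,
+ VT, Rank and ConditionNumber are assumed from Math.NET Numerics; the comments above do not name them):
+
+            using System;
+            using MathNet.Numerics.LinearAlgebra;
+
+            var M = Matrix<double>.Build.DenseOfArray(new double[,] {
+                { 1.0, 0.0 },
+                { 0.0, 0.5 },
+                { 0.0, 0.0 } });
+
+            var svd = M.Svd();                    // U, Σ and V^T are computed at construction time
+            Vector<double> sigma = svd.S;         // singular values
+            Matrix<double> back = svd.U * svd.W * svd.VT;   // (3x3)*(3x2)*(2x2) reconstructs M
+            Console.WriteLine(svd.Rank);          // number of non-negligible singular values
+            Console.WriteLine(svd.ConditionNumber);
+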
+ + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + Supported data types are double, single, , and . + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + + + + The value of 1.0. + + + + + The value of 0.0. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. 
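+
+ The transpose-multiply members above avoid materializing an explicit transpose. A short sketch of the
+ corresponding public calls, assuming the Math.NET Numerics names TransposeThisAndMultiply and
+ TransposeAndMultiply (the comments above describe the operations but not the method names):
+
+            using MathNet.Numerics.LinearAlgebra;
+
+            var A = Matrix<double>.Build.Random(4, 3);
+            var B = Matrix<double>.Build.Random(4, 3);
+            var v = Vector<double>.Build.Random(4);
+
+            Matrix<double> AtB = A.TransposeThisAndMultiply(B);  // same result as A.Transpose() * B, 3x3
+            Matrix<double> ABt = A.TransposeAndMultiply(B);      // same result as A * B.Transpose(), 4x4
+            Vector<double> Atv = A.TransposeThisAndMultiply(v);  // same result as A.Transpose() * v
+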
+ + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar denominator to use. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar numerator to use. + The matrix to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent matrix and store the result into the result matrix. + + The exponent matrix to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Adds a scalar to each element of the matrix. + + The scalar to add. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds a scalar to each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the addition. 
+ If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix. + + The scalar to subtract. + A new matrix containing the subtraction of this matrix and the scalar. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts each element of the matrix from a scalar. + + The scalar to subtract from. + A new matrix containing the subtraction of the scalar and this matrix. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of this matrix with a scalar. + + The scalar to multiply with. + The result of the multiplication. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides each element of this matrix with a scalar. + + The scalar to divide with. + The result of the division. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides a scalar by each element of the matrix. + + The scalar to divide. + The result of the division. + + + + Divides a scalar by each element of the matrix and places results into the result matrix. + + The scalar to divide. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.ColumnCount != rightSide.Count. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.RowCount. + If this.ColumnCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ). + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. 
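+
+ The scalar and matrix arithmetic described above is also reachable through the usual operator
+ overloads. A minimal sketch, assuming the Math.NET Numerics builder and operator definitions
+ (none of these names appear in the comments themselves):
+
+            using MathNet.Numerics.LinearAlgebra;
+
+            var A = Matrix<double>.Build.DenseIdentity(3);
+            var B = Matrix<double>.Build.Dense(3, 3, 2.0);   // every element set to 2.0
+            var x = Vector<double>.Build.Dense(3, 1.0);      // every element set to 1.0
+
+            Matrix<double> C = A + B;        // element-wise addition, dimensions must match
+            Matrix<double> D = 2.0 * A - B;  // scalar multiplication and subtraction
+            Vector<double> y = A * x;        // matrix * vector (requires A.ColumnCount == x.Count)
+            Vector<double> z = x * A;        // vector * matrix, i.e. left multiplication
+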
+ + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.Rows. + If the result matrix's dimensions are not the this.Rows x other.Columns. + + + + Multiplies this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.Rows. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with the conjugate transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the conjugate transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the conjugate transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. 
+ The result of the multiplication. + + + + Raises this square matrix to a positive integer exponent and places the results into the result matrix. + + The positive integer exponent to raise the matrix to. + The result of the power. + + + + Multiplies this square matrix with another matrix and returns the result. + + The positive integer exponent to raise the matrix to. + + + + Negate each element of this matrix. + + A matrix containing the negated values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + if the result matrix's dimensions are not the same as this matrix. + + + + Complex conjugate each element of this matrix. + + A matrix containing the conjugated values. + + + + Complex conjugate each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + if the result matrix's dimensions are not the same as this matrix. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Pointwise multiplies this matrix with another matrix. + + The matrix to pointwise multiply with this one. + If this matrix and are not the same size. + A new matrix that is the pointwise multiplication of this matrix and . + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise divide this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + A new matrix that is the pointwise division of this matrix and . + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + If this matrix and are not the same size. 
+ If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise modulus. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise remainder. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Helper function to apply a unary function to a matrix. The function + f modifies the matrix given to it in place. Before its + called, a copy of the 'this' matrix is first created, then passed to + f. The copy is then returned as the result + + Function which takes a matrix, modifies it in place and returns void + New instance of matrix which is the result + + + + Helper function to apply a unary function which modifies a matrix + in place. + + Function which takes a matrix, modifies it in place and returns void + The matrix to be passed to f and where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two matrices + and modifies the latter in place. A copy of the "this" matrix is + first made and then passed to f together with the other matrix. The + copy is then returned as the result + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The resulting matrix + If this matrix and are not the same dimension. + + + + Helper function to apply a binary function which takes two matrices + and modifies the second one in place + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The matrix to store the result. + The resulting matrix + If this matrix and are not the same dimension. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The matrix to store the result. 
+ If this matrix and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + + + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + The other matrix 'y' + The matrix with the result and 'x' + + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Calculates the rank of the matrix. + + effective numerical rank, obtained from SVD + + + + Calculates the nullity of the matrix. + + effective numerical nullity, obtained from SVD + + + Calculates the condition number of this matrix. + The condition number of the matrix. 
+ The condition number is calculated using singular value decomposition. + + + Computes the determinant of this matrix. + The determinant of this matrix. + + + + Computes an orthonormal basis for the null space of this matrix, + also known as the kernel of the corresponding matrix transformation. + + + + + Computes an orthonormal basis for the column space of this matrix, + also known as the range or image of the corresponding matrix transformation. + + + + Computes the inverse of this matrix. + The inverse of this matrix. + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + If the result matrix's dimensions are not (this.Rows * lower.rows) x (this.Columns * lower.Columns). + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. 
+ + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + For sparse matrices, the L2 norm is computed using a dense implementation of singular value decomposition. + In a later release, it will be replaced with a sparse implementation. + + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Returns a string that describes the type, dimensions and shape of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes this matrix. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Matrix class. + + + + + Gets the raw matrix data storage. + + + + + Gets the number of columns. + + The number of columns. + + + + Gets the number of rows. + + The number of rows. + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. 
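+
+ The range-checked indexer and the unchecked accessors it refers to (the stripped references above)
+ presumably map to the Math.NET Numerics indexer this[row, column] and the At methods; a sketch under
+ that assumption:
+
+            using MathNet.Numerics.LinearAlgebra;
+
+            var m = Matrix<double>.Build.Dense(2, 2);
+
+            m[0, 1] = 3.0;                  // indexer: validates the row and column indices
+            double a = m[0, 1];
+
+            m.At(1, 0, 5.0);                // At: no range checking, slightly faster in tight loops
+            double b = m.At(1, 0);
+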
+ + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + + + + Sets the value of the given element without range checking. + + + The row of the element. + + + The column of the element. + + + The value to set the element to. + + + + + Sets all values to zero. + + + + + Sets all values of a row to zero. + + + + + Sets all values of a column to zero. + + + + + Sets all values for all of the chosen rows to zero. + + + + + Sets all values for all of the chosen columns to zero. + + + + + Sets all values of a sub-matrix to zero. + + + + + Set all values whose absolute value is smaller than the threshold to zero, in-place. + + + + + Set all values that meet the predicate to zero, in-place. + + + + + Creates a clone of this instance. + + + A clone of the instance. + + + + + Copies the elements of this matrix to the given matrix. + + + The matrix to copy values into. + + + If target is . + + + If this and the target matrix do not have the same dimensions.. + + + + + Copies a row into an Vector. + + The row to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of rows. + + + + Copies a row into to the given Vector. + + The row to copy. + The Vector to copy the row into. + If the result vector is . + If is negative, + or greater than or equal to the number of rows. + If this.Columns != result.Count. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of rows. + is negative, + or greater than or equal to the number of columns. + (columnIndex + length) >= Columns. + If is not positive. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Copies a column into a new Vector>. + + The column to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of columns. + + + + Copies a column into to the given Vector. + + The column to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If this.Rows != result.Count. + + + + Copies the requested column elements into a new Vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of columns. + is negative, + or greater than or equal to the number of rows. + (rowIndex + length) >= Rows. + + If is not positive. + + + + Copies the requested column elements into the given vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. 
+ If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Returns the elements of the diagonal in a Vector. + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a new matrix and inserts the given column at the given index. + + The index of where to insert the column. + The column to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of columns. + If the size of != the number of rows. + + + + Creates a new matrix with the given column removed. + + The index of the column to remove. + A new matrix without the chosen column. + If is < zero or >= the number of columns. + + + + Copies the values of the given Vector to the specified column. + + The column to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + + + + Copies the values of the given Vector to the specified sub-column. + + The column to copy the values to. + The row to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + + + + Copies the values of the given array to the specified column. + + The column to copy the values to. 
+ The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + If the size of does not + equal the number of rows of this Matrix. + + + + Creates a new matrix and inserts the given row at the given index. + + The index of where to insert the row. + The row to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of rows. + If the size of != the number of columns. + + + + Creates a new matrix with the given row removed. + + The index of the row to remove. + A new matrix without the chosen row. + If is < zero or >= the number of rows. + + + + Copies the values of the given Vector to the specified row. + + The row to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given Vector to the specified sub-row. + + The row to copy the values to. + The column to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given array to the specified row. + + The row to copy the values to. + The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The column to start copying to. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The number of rows to copy. Must be positive. + The column to start copying to. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The row of the sub-matrix to start copying from. + The number of rows to copy. Must be positive. + The column to start copying to. + The column of the sub-matrix to start copying from. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of the given Vector to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. 
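+
+ A sketch of the copy-in members described above, assuming the Math.NET Numerics names SetRow,
+ SetColumn, SetDiagonal, SetSubMatrix and RemoveColumn (the operations are documented above, the
+ names are not):
+
+            using MathNet.Numerics.LinearAlgebra;
+
+            var m = Matrix<double>.Build.Dense(3, 3);
+
+            m.SetRow(0, new[] { 1.0, 2.0, 3.0 });                // copies the array into row 0
+            m.SetColumn(2, Vector<double>.Build.Dense(3, 9.0));  // copies the vector into column 2
+            m.SetDiagonal(new[] { 1.0, 1.0, 1.0 });              // length must equal Min(Rows, Columns)
+            m.SetSubMatrix(1, 1, Matrix<double>.Build.Dense(2, 2, 4.0));  // paste a 2x2 block at (1,1)
+
+            var trimmed = m.RemoveColumn(2);                     // returns a new 3x2 matrix
+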
+ + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Returns the transpose of this matrix. + + The transpose of this matrix. + + + + Puts the transpose of this matrix into the result matrix. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + + + + Concatenates this matrix with the given matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Concatenates this matrix with the given matrix and places the result into the result matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Diagonally stacks his matrix on top of the given matrix. The new matrix is a M-by-N matrix, + where M = this.Rows + lower.Rows and N = this.Columns + lower.Columns. + The values of off the off diagonal matrices/blocks are set to zero. + + The lower, right matrix. + If lower is . + the combined matrix + + + + + + Diagonally stacks his matrix on top of the given matrix and places the combined matrix into the result matrix. + + The lower, right matrix. + The combined matrix + If lower is . + If the result matrix is . + If the result matrix's dimensions are not (this.Rows + lower.rows) x (this.Columns + lower.Columns). + + + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Returns this matrix as a multidimensional array. + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + + A multidimensional containing the values of this matrix. + + + + Returns the matrix's elements as an array with the data laid out column by column (column major). + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns the matrix's elements as an array with the data laid out row by row (row major). + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
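To make the copy semantics of these array conversions concrete, here is a hedged sketch (again assuming the Math.NET Numerics `Matrix<double>` type) showing the two-dimensional, row-major and column-major copies side by side.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1, 2, 3 },
    { 4, 5, 6 },
    { 7, 8, 9 }
});

double[,] grid    = m.ToArray();             // independent 2D copy
double[] rowMajor = m.ToRowMajorArray();     // 1, 2, 3, 4, 5, 6, 7, 8, 9
double[] colMajor = m.ToColumnMajorArray();  // 1, 4, 7, 2, 5, 8, 3, 6, 9
double[][] rows   = m.ToRowArrays();         // jagged array, one inner array per row

// All of these are copies: writing to them leaves m unchanged
rowMajor[0] = 99.0;
Console.WriteLine(m[0, 0]);                  // still 1
```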
+ + + Returns this matrix as array of row arrays. + The returned arrays will be independent from this matrix. + A new memory block will be allocated for the arrays. + + + + + Returns this matrix as array of column arrays. + The returned arrays will be independent from this matrix. + A new memory block will be allocated for the arrays. + + + + + Returns the internal multidimensional array of this matrix if, and only if, this matrix is stored by such an array internally. + Otherwise returns null. Changes to the returned array and the matrix will affect each other. + Use ToArray instead if you always need an independent array. + + + + + Returns the internal column by column (column major) array of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToColumnMajorArray instead if you always need an independent array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns the internal row by row (row major) array of this matrix if, and only if, this matrix is stored by such an array internally. + Otherwise returns null. Changes to the returned array and the matrix will affect each other. + Use ToRowMajorArray instead if you always need an independent array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
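The internal-array accessors described above differ from the conversions before them in that they may expose the matrix's own storage instead of copying it. A tentative sketch, assuming dense matrices in this library are stored as a single column-major array and that the accessor returns null for other storage schemes, as the entries above state:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

var dense = DenseMatrix.OfArray(new double[,] { { 1, 2 }, { 3, 4 } });

// AsColumnMajorArray exposes the internal storage of a dense matrix
// (stored column by column), so writes show up in the matrix itself.
double[] shared = dense.AsColumnMajorArray();
shared[0] = 42.0;
Console.WriteLine(dense[0, 0]);               // 42

// For storage schemes that are not a single column-major array,
// the entries above say the call returns null instead.
var sparse = Matrix<double>.Build.Sparse(2, 2);
Console.WriteLine(dense.AsColumnMajorArray() != null);   // True
Console.WriteLine(sparse.AsColumnMajorArray() == null);  // True
```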
+ + + Returns the internal row arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToRowArrays instead if you always need an independent array. + + + + + Returns the internal column arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToColumnArrays instead if you always need an independent array. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix. + + The column to start enumerating over. + The number of columns to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix and their index. + + The column to start enumerating over. + The number of columns to enumerating over. + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix. + + The row to start enumerating over. + The number of rows to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix and their index. + + The row to start enumerating over. + The number of rows to enumerating over. + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. 
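The enumeration entries above can be exercised as follows; a brief sketch assuming the `Enumerate`, `EnumerateColumns` and `EnumerateRows` members of the Math.NET Numerics matrix type.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Dense(3, 3, (i, j) => i * 3.0 + j);

// All values; the ordering is unspecified and depends on the storage scheme
foreach (double value in m.Enumerate())
{
    Console.Write($"{value} ");
}
Console.WriteLine();

// Column vectors, then row vectors
foreach (Vector<double> column in m.EnumerateColumns())
{
    Console.WriteLine(column);
}
foreach (Vector<double> row in m.EnumerateRows())
{
    Console.WriteLine(row);
}
```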
+ + + + + Applies a function to each value of this matrix and replaces the value with its result. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value with its result. + The row and column indices of each value (zero-based) are passed as first arguments to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + For each row, applies a function f to each element of the row, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each row. + + + + + For each column, applies a function f to each element of the column, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each column. + + + + + Applies a function f to each row vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Applies a function f to each column vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Reduces all row vectors by applying a function between two of them, until only a single vector is left. + + + + + Reduces all column vectors by applying a function between two of them, until only a single vector is left. 
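A short example of the map/reduce entries above. The `Zeros.Include` flag and the `Map`/`MapIndexed`/`ReduceRows` names reflect my understanding of the library's API and should be treated as assumptions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, -2 }, { -3, 4 } });

// Map: apply a function to every element and return a new matrix.
// Zeros.Include forces the function onto zero entries as well,
// which matters mostly for sparse storage.
var abs = m.Map(Math.Abs, Zeros.Include);

// MapIndexed: the zero-based row/column indices are passed in as well
var scaled = m.MapIndexed((i, j, x) => x * (i + 1) * (j + 1));

// ReduceRows: combine row vectors pairwise until a single vector is left
var rowSum = m.ReduceRows((a, b) => a + b);

Console.WriteLine(abs);
Console.WriteLine(rowSum);
```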
+ + + + + Applies a function to each value pair of two matrices and replaces the value in the result vector. + + + + + Applies a function to each value pair of two matrices and returns the results as a new vector. + + + + + Applies a function to update the status with each value pair of two matrices and returns the resulting status. + + + + + Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a tuple with the index and values of the first element pair of two matrices of the same size satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element pairs of two matrices of the same size satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all elements satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all element pairs of two matrices of the same size satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to add. + The scalar value to add. + The result of the addition. + If is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to add. + The right matrix to add. + The result of the addition. + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts a scalar from each element of a matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to subtract. + The scalar value to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts each element of a matrix from a scalar. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to subtract. + The right matrix to subtract. + The result of the subtraction. 
+ If and don't have the same dimensions. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Divides a scalar with a matrix. + + The scalar to divide. + The matrix. + The result of the division. + If is . + + + + Divides a matrix with a scalar. + + The matrix to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of the matrix of the given divisor. + + The matrix whose elements we want to compute the modulus of. + The divisor to use. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of the given dividend of each element of the matrix. + + The dividend we want to compute the modulus of. + The matrix whose elements we want to use as divisor. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of two matrices. + + The matrix whose elements we want to compute the remainder of. + The divisor to use. + If and are not the same size. + If is . + + + + Computes the sqrt of a matrix pointwise + + The input matrix + + + + + Computes the exponential of a matrix pointwise + + The input matrix + + + + + Computes the log of a matrix pointwise + + The input matrix + + + + + Computes the log10 of a matrix pointwise + + The input matrix + + + + + Computes the sin of a matrix pointwise + + The input matrix + + + + + Computes the cos of a matrix pointwise + + The input matrix + + + + + Computes the tan of a matrix pointwise + + The input matrix + + + + + Computes the asin of a matrix pointwise + + The input matrix + + + + + Computes the acos of a matrix pointwise + + The input matrix + + + + + Computes the atan of a matrix pointwise + + The input matrix + + + + + Computes the sinh of a matrix pointwise + + The input matrix + + + + + Computes the cosh of a matrix pointwise + + The input matrix + + + + + Computes the tanh of a matrix pointwise + + The input matrix + + + + + Computes the absolute value of a matrix pointwise + + The input matrix + + + + + Computes the floor of a matrix pointwise + + The input matrix + + + + + Computes the ceiling of a matrix pointwise + + The input matrix + + + + + Computes the rounded value of a matrix pointwise + + The input matrix + + + + + Computes the Cholesky decomposition for a matrix. + + The Cholesky decomposition object. + + + + Computes the LU decomposition for a matrix. + + The LU decomposition object. 
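The operator overloads and factorizations described above combine naturally; a sketch assuming the usual operator overloads and the `Cholesky()` factorization on `Matrix<double>` (LU would work the same way for general square matrices):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 4, 1 }, { 1, 3 } });
var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });

// The operators allocate new objects for their results
var twoA    = 2.0 * a;        // scalar * matrix
var squared = a * a;          // matrix * matrix
var av      = a * b;          // matrix * vector

// a is symmetric positive definite, so a Cholesky factorization
// gives a direct solution of a * x = b
var x = a.Cholesky().Solve(b);

Console.WriteLine(x);
```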
+ + + + Computes the QR decomposition for a matrix. + + The type of QR factorization to perform. + The QR decomposition object. + + + + Computes the QR decomposition for a matrix using Modified Gram-Schmidt Orthogonalization. + + The QR decomposition object. + + + + Computes the SVD decomposition for a matrix. + + Compute the singular U and VT vectors or not. + The SVD decomposition object. + + + + Computes the EVD decomposition for a matrix. + + The EVD decomposition object. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. 
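QR is the factorization the direct solve entries above rely on, and it also yields least-squares solutions for overdetermined systems. A hedged example under the same API assumptions:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

// Overdetermined system: 3 equations, 2 unknowns
var a = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1, 1 },
    { 1, 2 },
    { 1, 3 }
});
var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 2.0 });

// QR factorization, then a least-squares solve of a * x = b
var qr = a.QR();
var x = qr.Solve(b);

Console.WriteLine(x);   // intercept and slope of the best straight-line fit
```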
+ + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The result matrix X. + + + + Converts a matrix to single precision. + + + + + Converts a matrix to double precision. + + + + + Converts a matrix to single precision complex numbers. + + + + + Converts a matrix to double precision complex numbers. + + + + + Gets a single precision complex matrix with the real parts from the given matrix. + + + + + Gets a double precision complex matrix with the real parts from the given matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Existing data may not be all zeros, so clearing may be necessary + if not all of it will be overwritten anyway. + + + + + If existing data is assumed to be all zeros already, + clearing it may be skipped if applicable. + + + + + Allow skipping zero entries (without enforcing skipping them). + When enumerating sparse matrices this can significantly speed up operations. + + + + + Force applying the operation to all fields even if they are zero. + + + + + It is not known yet whether a matrix is symmetric or not. + + + + + A matrix is symmetric + + + + + A matrix is Hermitian (conjugate symmetric). + + + + + A matrix is not symmetric + + + + + Defines an that uses a cancellation token as stop criterion. + + + + + Initializes a new instance of the class. + + + + + Initializes a new instance of the class. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. 
+ + + + Stop criterion that delegates the status determination to a delegate. + + + + + Create a new instance of this criterion with a custom implementation. + + Custom implementation with the same signature and semantics as the DetermineStatus method. + + + + Determines the status of the iterative calculation by delegating it to the provided delegate. + Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + + + + Clones this criterion and its settings. + + + + + Monitors an iterative calculation for signs of divergence. + + + + + The maximum relative increase the residual may experience without triggering a divergence warning. + + + + + The number of iterations over which a residual increase should be tracked before issuing a divergence warning. + + + + + The status of the calculation + + + + + The array that holds the tracking information. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified maximum + relative increase and the specified minimum number of tracking iterations. + + The maximum relative increase that the residual may experience before a divergence warning is issued. + The minimum number of iterations over which the residual must grow before a divergence warning is issued. + + + + Gets or sets the maximum relative increase that the residual may experience before a divergence warning is issued. + + Thrown if the Maximum is set to zero or below. + + + + Gets or sets the minimum number of iterations over which the residual must grow before + issuing a divergence warning. + + Thrown if the value is set to less than one. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Detect if solution is diverging + + true if diverging, otherwise false + + + + Gets required history Length + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Defines an that monitors residuals for NaN's. + + + + + The status of the calculation + + + + + The iteration number of the last iteration. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. 
+ + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + The base interface for classes that provide stop criteria for iterative calculations. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current IIterationStopCriterion. Status is set to Status field of current object. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + is not a legal value. Status should be set in implementation. + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + To implementers: Invoking this method should not clear the user defined + property values, only the state that is used to track the progress of the + calculation. + + + + Defines the interface for classes that solve the matrix equation Ax = b in + an iterative manner. + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Defines the interface for objects that can create an iterative solver with + specific settings. This interface is used to pass iterative solver creation + setup information around. + + + + + Gets the type of the solver that will be created by this setup object. + + + + + Gets type of preconditioner, if any, that will be created by this setup object. + + + + + Creates the iterative solver to be used. + + + + + Creates the preconditioner to be used by default (can be overwritten). + + + + + Gets the relative speed of the solver. + + Returns a value between 0 and 1, inclusive. + + + + Gets the relative reliability of the solver. + + Returns a value between 0 and 1 inclusive. + + + + The base interface for preconditioner classes. + + + + Preconditioners are used by iterative solvers to improve the convergence + speed of the solving process. Increase in convergence speed + is related to the number of iterations necessary to get a converged solution. + So while in general the use of a preconditioner means that the iterative + solver will perform fewer iterations it does not guarantee that the actual + solution time decreases given that some preconditioners can be expensive to + setup and run. + + + Note that in general changes to the matrix will invalidate the preconditioner + if the changes occur after creating the preconditioner. + + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix on which the preconditioner is based. + + + + Approximates the solution to the matrix equation Mx = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. 
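The preconditioner interface described above (an initialize step followed by repeated approximate calls) can be implemented directly. The sketch below is a hypothetical Jacobi (diagonal) preconditioner, not a class shipped by the library; the interface name and method signatures are inferred from the entries above and my reading of the Math.NET Numerics solver namespace.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Hypothetical Jacobi (diagonal) preconditioner: M = diag(A), so each
// Approximate call solves diag(A) * lhs = rhs element by element.
public class JacobiPreconditioner : IPreconditioner<double>
{
    Vector<double> _inverseDiagonal;

    public void Initialize(Matrix<double> matrix)
    {
        // Store 1 / A[i,i]; assumes a square matrix with a non-zero diagonal
        _inverseDiagonal = matrix.Diagonal().Map(d => 1.0 / d);
    }

    public void Approximate(Vector<double> rhs, Vector<double> lhs)
    {
        // lhs = diag(A)^-1 * rhs, written into the caller-supplied result vector
        rhs.PointwiseMultiply(_inverseDiagonal, lhs);
    }
}
```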
+ + + + Defines an that monitors the numbers of iteration + steps as stop criterion. + + + + + The default value for the maximum number of iterations the process is allowed + to perform. + + + + + The maximum number of iterations the calculation is allowed to perform. + + + + + The status of the calculation + + + + + Initializes a new instance of the class with the default maximum + number of iterations. + + + + + Initializes a new instance of the class with the specified maximum + number of iterations. + + The maximum number of iterations the calculation is allowed to perform. + + + + Gets or sets the maximum number of iterations the calculation is allowed to perform. + + Thrown if the Maximum is set to a negative value. + + + + Returns the maximum number of iterations to the default. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Iterative Calculation Status + + + + + An iterator that is used to check if an iterative calculation should continue or stop. + + + + + The collection that holds all the stop criteria and the flag indicating if they should be added + to the child iterators. + + + + + The status of the iterator. + + + + + Initializes a new instance of the class with the default stop criteria. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Gets the current calculation status. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual iterators may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Indicates to the iterator that the iterative process has been cancelled. + + + Does not reset the stop-criteria. + + + + + Resets the to the pre-calculation state. + + + + + Creates a deep clone of the current iterator. + + The deep clone of the current iterator. + + + + Defines an that monitors residuals as stop criterion. + + + + + The maximum value for the residual below which the calculation is considered converged. 
+ + + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + The status of the calculation + + + + + The number of iterations since the residuals got below the maximum. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified + maximum residual and minimum number of iterations. + + + The maximum value for the residual below which the calculation is considered converged. + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + Gets or sets the maximum value for the residual below which the calculation is considered + converged. + + Thrown if the Maximum is set to a negative value. + + + + Gets or sets the minimum number of iterations for which the residual has to be + below the maximum before the calculation is considered converged. + + Thrown if the BelowMaximumFor is set to a value less than 1. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Loads the available objects from the specified assembly. + + The assembly which will be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The type in the assembly which should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The of the assembly that should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + + + + A unit preconditioner. This preconditioner does not actually do anything + it is only used when running an without + a preconditioner. + + + + + The coefficient matrix on which this preconditioner operates. + Is used to check dimensions on the different vectors that are processed. + + + + + Initializes the preconditioner and loads the internal data structures. + + + The matrix upon which the preconditioner is based. + + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + If and do not have the same size. + + + - or - + + + If the size of is different the number of rows of the coefficient matrix. 
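Putting the iterative-solver pieces together: a solver, explicit stop criteria, and (implicitly) the unit preconditioner described above. A sketch assuming a `SolveIterative` overload that accepts stop criteria directly; the solver and criterion class names follow my reading of the library and are assumptions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

var a = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 4, 1, 0 },
    { 1, 4, 1 },
    { 0, 1, 4 }
});
var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

// Stop after at most 1000 iterations, or as soon as the residual
// norm drops below 1e-10, whichever comes first.
var x = a.SolveIterative(
    b,
    new BiCgStab(),
    new IterationCountStopCriterion<double>(1000),
    new ResidualStopCriterion<double>(1e-10));

Console.WriteLine(x);
```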
+ + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Evaluate the row and column at a specific data index. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + + The array containing the row indices of the existing rows. Element "i" of the array gives the index of the + element in the array that is first non-zero element in a row "i". 
+ The last value is equal to ValueCount, so that the number of non-zero entries in row "i" is always + given by RowPointers[i+i] - RowPointers[i]. This array thus has length RowCount+1. + + + + + An array containing the column indices of the non-zero values. Element "j" of the array + is the number of the column in matrix that contains the j-th value in the array. + + + + + Array that contains the non-zero elements of matrix. Values of the non-zero elements of matrix are mapped into the values + array using the row-major storage mapping described in a compressed sparse row (CSR) format. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Delete value from internal storage + + Index of value in nonZeroValues array + Row number of matrix + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Find item Index in nonZeroValues array + + Matrix row index + Matrix column index + Item index + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Array that contains the indices of the non-zero values. + + + + + Array that contains the non-zero elements of the vector. + + + + + Gets the number of non-zero elements in the vector. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the vector storage format is dense. + + + + + Gets or sets the value at the given index, with range checking. + + + The index of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + The index of the element. + The requested element. + Not range-checked. + + + + Sets the element without range checking. + + The index of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. 
+ + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + + Defines the generic class for Vector classes. + + Supported data types are double, single, , and . + + + + The zero value for type T. + + + + + The value of 1.0 for type T. + + + + + Negates vector and save result to + + Target vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar denominator to use. + The vector to store the result of the division. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar numerator to use. + The vector to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. 
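Referring back to the compressed sparse row (CSR) storage described a few entries above: the worked example below lists the values, column indices and row pointers for a small matrix. Note that the number of non-zeros in row i is `RowPointers[i+1] - RowPointers[i]`.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

// The matrix
//   | 1 0 2 |
//   | 0 0 3 |
//   | 4 5 0 |
// stored in CSR form has
//   Values        = [ 1, 2, 3, 4, 5 ]
//   ColumnIndices = [ 0, 2, 2, 0, 1 ]
//   RowPointers   = [ 0, 2, 3, 5 ]   // length RowCount + 1, last entry = ValueCount
// so row i owns the value slots RowPointers[i] .. RowPointers[i+1] - 1.
var sparse = SparseMatrix.OfArray(new double[,]
{
    { 1, 0, 2 },
    { 0, 0, 3 },
    { 4, 5, 0 }
});

Console.WriteLine(sparse.NonZerosCount);   // 5
```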
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Adds a scalar to each element of the vector. + + The scalar to add. + A copy of the vector with the scalar added. + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + If this vector and are not the same size. + + + + Adds another vector to this vector. + + The vector to add to this one. + A new vector containing the sum of both vectors. + If this vector and are not the same size. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Subtracts a scalar from each element of the vector. + + The scalar to subtract. + A new vector containing the subtraction of this vector and the scalar. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Subtracts each element of the vector from a scalar. + + The scalar to subtract from. + A new vector containing the subtraction of the scalar and this vector. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Returns a negated vector. + + The negated vector. + Added as an alternative to the unary negation operator. + + + + Negates vector and save result to + + Target vector + + + + Subtracts another vector from this vector. + + The vector to subtract from this one. + A new vector containing the subtraction of the two vectors. + If this vector and are not the same size. 
+ + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Return vector with complex conjugate values of the source vector + + Conjugated vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector. + + The scalar to multiply. + A new vector that is the multiplication of the vector and the scalar. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + If this vector and are not the same size. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + If is not of the same size. + + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + If is not of the same size. + If is . + + + + + Divides each element of the vector by a scalar. + + The scalar to divide with. + A new vector that is the division of the vector and the scalar. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar to divide with. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Divides a scalar by each element of the vector. + + The scalar to divide. + A new vector that is the division of the vector and the scalar. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. 
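A compact example of the vector products and pointwise operations documented above; the method names are assumptions based on the Math.NET Numerics `Vector<double>` API.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var u = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
var v = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

double dot = u.DotProduct(v);          // sum of u[i] * v[i] = 32
var outer  = u.OuterProduct(v);        // 3x3 matrix with entries u[i] * v[j]

// Pointwise operations return new vectors of the same length
var prod = u.PointwiseMultiply(v);     // 4, 10, 18
var quot = u.PointwiseDivide(v);
var pow2 = u.PointwisePower(2.0);      // 1, 4, 9

Console.WriteLine(dot);
Console.WriteLine(outer);
```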
+ + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this vector with another vector. + + The vector to pointwise multiply with this one. + A new vector which is the pointwise multiplication of the two vectors. + If this vector and are not the same size. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector. + + The pointwise denominator vector to use. + A new vector which is the pointwise division of the two vectors. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise division. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The matrix to store the result into. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + The vector to store the result into. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise modulus. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise remainder. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Helper function to apply a unary function to a vector. The function + f modifies the vector given to it in place. Before its + called, a copy of the 'this' vector with the same dimension is + first created, then passed to f. 
The copy is returned as the result + + Function which takes a vector, modifies it in place and returns void + New instance of vector which is the result + + + + Helper function to apply a unary function which modifies a vector + in place. + + Function which takes a vector, modifies it in place and returns void + The vector where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes a scalar and + a vector and modifies the latter in place. A copy of the "this" + vector is therefore first made and then passed to f together with + the scalar argument. The copy is then returned as the result + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The resulting vector + + + + Helper function to apply a binary function which takes a scalar and + a vector, modifies the latter in place and returns void. + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The vector where the result will be placed + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the latter in place. A copy of the "this" vector is + first made and then passed to f together with the other vector. The + copy is then returned as the result + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the second one in place + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The vector to store the result. + If this vector and are not the same size. 
+ + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + The vector to store the result + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector. + + The other vector + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. 
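For the outer-product and pointwise math-function entries above, a brief sketch (Math.NET Numerics assumed; `PointwiseSin` is inferred from the sin entry):

```csharp
// Sketch: outer product and a pointwise trig function (Math.NET Numerics assumed).
using MathNet.Numerics.LinearAlgebra;

var u = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });
var v = Vector<double>.Build.DenseOfArray(new[] { 3.0, 4.0, 5.0 });

Matrix<double> m = u.OuterProduct(v);   // 2x3 matrix, m[i,j] = u[i] * v[j]
// m = [ 3  4  5 ]
//     [ 6  8 10 ]

var angles = Vector<double>.Build.DenseOfArray(new[] { 0.0, System.Math.PI / 2 });
var sines = angles.PointwiseSin();      // [0, 1]
```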
+ + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = (sum(abs(this[i])^p))^(1/p) + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + The p value. + This vector normalized to a unit vector with respect to the p-norm. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the value of maximum element. + + The value of maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the value of the minimum element. + + The value of the minimum element. + + + + Returns the index of the minimum element. + + The index of minimum element. 
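The norm and extremum entries above map onto methods such as `L1Norm`, `L2Norm`, `InfinityNorm` and `Normalize`; a hedged sketch:

```csharp
// Sketch: norms, normalization and extrema (Math.NET Numerics assumed).
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.DenseOfArray(new[] { 3.0, -4.0 });

double l1   = v.L1Norm();         // 7  (sum of absolute values)
double l2   = v.L2Norm();         // 5  (Euclidean length)
double linf = v.InfinityNorm();   // 4  (maximum absolute value)

var unit = v.Normalize(2);        // [0.6, -0.8], unit length w.r.t. the 2-norm

double maxAbs = v.AbsoluteMaximum();       // 4
int maxAbsIdx = v.AbsoluteMaximumIndex();  // 1
```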
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Computes the sum of the absolute value of the vector's elements. + + The sum of the absolute value of the vector's elements. + + + + Indicates whether the current object is equal to another object of the same type. + + An object to compare with this object. + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Returns an enumerator that iterates through the collection. + + + A that can be used to iterate through the collection. + + + + + Returns an enumerator that iterates through a collection. + + + An object that can be used to iterate through the collection. + + + + + Returns a string that describes the type, dimensions and shape of this vector. + + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Character to use to print if there is not enough space to print all entries. Typical value: "..". + Character to use to separate two columns on a line. Typical value: " " (2 spaces). + Character to use to separate two rows/lines. Typical value: Environment.NewLine. + Function to provide a string for any given entry value. + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that represents the content of this vector, column by column. + + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector, column by column and with a type header. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Vector class. + + + + + Gets the raw vector data storage. + + + + + Gets the length or number of dimensions of this vector. + + + + Gets or sets the value at the given . + The index of the value to get or set. + The value of the vector at the given . + If is negative or + greater than the size of the vector. + + + Gets the value at the given without range checking.. + The index of the value to get or set. + The value of the vector at the given . + + + Sets the at the given without range checking.. + The index of the value to get or set. + The value to set. 
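A small sketch of element access and summation as described above, contrasting the range-checked indexer with the unchecked `At` accessors:

```csharp
// Sketch: element access and summation (Math.NET Numerics assumed).
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.Dense(4);   // [0, 0, 0, 0]

v[0] = 1.5;              // range-checked indexer (throws on an invalid index)
v.At(1, 2.5);            // unchecked set: caller guarantees 0 <= index < Count
double first = v.At(0);  // unchecked get

double total         = v.Sum();            // 4.0
double totalAbsolute = v.SumMagnitudes();  // 4.0, sum of absolute values

System.Console.WriteLine(v.ToString());    // short type header followed by the values
```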
+ + + + Resets all values to zero. + + + + + Sets all values of a subvector to zero. + + + + + Set all values whose absolute value is smaller than the threshold to zero, in-place. + + + + + Set all values that meet the predicate to zero, in-place. + + + + + Returns a deep-copy clone of the vector. + + A deep-copy clone of the vector. + + + + Set the values of this vector to the given values. + + The array containing the values to use. + If is . + If is not the same size as this vector. + + + + Copies the values of this vector into the target vector. + + The vector to copy elements into. + If is . + If is not the same size as this vector. + + + + Creates a vector containing specified elements. + + The first element to begin copying from. + The number of elements to copy. + A vector containing a copy of the specified elements. + If is not positive or + greater than or equal to the size of the vector. + If + is greater than or equal to the size of the vector. + + If is not positive. + + + + Copies the values of a given vector into a region in this vector. + + The field to start copying to + The number of fields to copy. Must be positive. + The sub-vector to copy from. + If is + + + + Copies the requested elements from this vector to another. + + The vector to copy the elements to. + The element to start copying from. + The element to start copying to. + The number of elements to copy. + + + + Returns the data contained in the vector as an array. + The returned array will be independent from this vector. + A new memory block will be allocated for the array. + + The vector's data as an array. + + + + Returns the internal array of this vector if, and only if, this vector is stored by such an array internally. + Otherwise returns null. Changes to the returned array and the vector will affect each other. + Use ToArray instead if you always need an independent array. + + + + + Create a matrix based on this vector in column form (one single column). + + + This vector as a column matrix. + + + + + Create a matrix based on this vector in row form (one single row). + + + This vector as a row matrix. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector. + + + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector. + + + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector and their index. + + + The enumerator returns a Tuple with the first value being the element index + and the second value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the vector and their index. + + + The enumerator returns a Tuple with the first value being the element index + and the second value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Applies a function to each value of this vector and replaces the value with its result. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value with its result. + The index of each value (zero-based) is passed as first argument to the function. 
+ If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and replaces the value in the result vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and returns the results as a new vector. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value of this vector and returns the results as a new vector. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse vectors). + + + + + Applies a function to each value pair of two vectors and replaces the value in the result vector. + + + + + Applies a function to each value pair of two vectors and returns the results as a new vector. + + + + + Applies a function to update the status with each value pair of two vectors and returns the resulting status. + + + + + Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a tuple with the index and values of the first element pair of two vectors of the same size satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element pairs of two vectors of the same size satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all elements satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all element pairs of two vectors of the same size satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a Vector containing the same values of . 
+ + This method is included for completeness. + The vector to get the values from. + A vector containing the same values as . + If is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Adds a scalar to each element of a vector. + + The vector to add to. + The scalar value to add. + The result of the addition. + If is . + + + + Adds a scalar to each element of a vector. + + The scalar value to add. + The vector to add to. + The result of the addition. + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of a vector. + + The vector to subtract from. + The scalar value to subtract. + The result of the subtraction. + If is . + + + + Subtracts each element of a vector from a scalar. + + The scalar value to subtract from. + The vector to subtract. + The result of the subtraction. + If is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a scalar with a vector. + + The scalar to divide. + The vector. + The result of the division. + If is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Pointwise divides two Vectors. + + The vector to divide. + The other vector. + The result of the division. + If and are not the same size. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the remainder of. + The divisor to use. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of the given dividend of each element of the vector. + + The dividend we want to compute the remainder of. + The vector whose elements we want to use as divisor. + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of two vectors. + + The vector whose elements we want to compute the remainder of. + The divisor to use. + If and are not the same size. + If is . 
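The slicing, map and operator entries above combine as in the following hedged sketch:

```csharp
// Sketch: copy-based slicing, Map/Map2 projections and the overloaded operators
// (Math.NET Numerics assumed).
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0, 4.0 });

var sub     = v.SubVector(1, 2);                     // copy of elements 1..2 -> [2, 3]
var squared = v.Map(x => x * x);                     // [1, 4, 9, 16]
var mixed   = v.Map2((a, b) => a + 2 * b, squared);  // element-wise combination

var w = 2.0 * v - squared;                           // operators return new vectors
var ratio = v.PointwiseDivide(squared);              // [1, 0.5, 1/3, 0.25]
```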
+ + + + Computes the sqrt of a vector pointwise + + The input vector + + + + + Computes the exponential of a vector pointwise + + The input vector + + + + + Computes the log of a vector pointwise + + The input vector + + + + + Computes the log10 of a vector pointwise + + The input vector + + + + + Computes the sin of a vector pointwise + + The input vector + + + + + Computes the cos of a vector pointwise + + The input vector + + + + + Computes the tan of a vector pointwise + + The input vector + + + + + Computes the asin of a vector pointwise + + The input vector + + + + + Computes the acos of a vector pointwise + + The input vector + + + + + Computes the atan of a vector pointwise + + The input vector + + + + + Computes the sinh of a vector pointwise + + The input vector + + + + + Computes the cosh of a vector pointwise + + The input vector + + + + + Computes the tanh of a vector pointwise + + The input vector + + + + + Computes the absolute value of a vector pointwise + + The input vector + + + + + Computes the floor of a vector pointwise + + The input vector + + + + + Computes the ceiling of a vector pointwise + + The input vector + + + + + Computes the rounded value of a vector pointwise + + The input vector + + + + + Converts a vector to single precision. + + + + + Converts a vector to double precision. + + + + + Converts a vector to single precision complex numbers. + + + + + Converts a vector to double precision complex numbers. + + + + + Gets a single precision complex vector with the real parts from the given vector. + + + + + Gets a double precision complex vector with the real parts from the given vector. + + + + + Gets a real vector representing the real parts of a complex vector. + + + + + Gets a real vector representing the real parts of a complex vector. + + + + + Gets a real vector representing the imaginary parts of a complex vector. + + + + + Gets a real vector representing the imaginary parts of a complex vector. + + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + + Predictor matrix X + Response vector Y + The direct method to be used to compute the regression. + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + + Predictor matrix X + Response matrix Y + The direct method to be used to compute the regression. + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + The direct method to be used to compute the regression. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + The direct method to be used to compute the regression. + Best fitting list of model parameters β for each element in the predictor-arrays. 
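The regression entries above describe least-squares solvers for X*β ≈ y. Assuming they belong to a `MultipleRegression` helper class as in Math.NET Numerics (an inference, not confirmed by this file), a sketch of the normal-equations path:

```csharp
// Hedged sketch: ordinary least squares for X*beta ≈ y via the normal equations.
// MultipleRegression.NormalEquations/QR/Svd are inferred names (Math.NET Numerics);
// QR and SVD are the slower but numerically more stable variants described above.
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearRegression;

var x = Matrix<double>.Build.DenseOfArray(new[,]
{
    { 1.0, 1.0 },
    { 1.0, 2.0 },
    { 1.0, 3.0 },
    { 1.0, 4.0 },
});
var y = Vector<double>.Build.DenseOfArray(new[] { 3.1, 5.0, 6.9, 9.1 });

Vector<double> beta = MultipleRegression.NormalEquations(x, y);
// beta[0] ≈ 1.05 (intercept column), beta[1] ≈ 1.99 (slope)
```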
+ + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses the cholesky decomposition of the normal equations. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations but also slower. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. 
+ Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Predictor matrix X + Response vector Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Predictor matrix X + Response matrix Y + Best fitting vector for model parameters β + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + List of predictor-arrays. + List of responses + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Find the model parameters β such that their linear combination with all predictor-arrays in X become as close to their response in Y as possible, with least squares residuals. + Uses a singular value decomposition and is therefore more numerically stable (especially if ill-conditioned) than the normal equations or QR but also slower. + + Sequence of predictor-arrays and their response. + True if an intercept should be added as first artificial predictor value. Default = false. + Best fitting list of model parameters β for each element in the predictor-arrays. + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as (a, b) tuple, + where a is the intercept and b the slope. + + Predictor (independent) + Response (dependent) + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as (a, b) tuple, + where a is the intercept and b the slope. + + Predictor-Response samples as tuples + + + + Least-Squares fitting the points (x,y) to a line y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + Predictor (independent) + Response (dependent) + + + + Least-Squares fitting the points (x,y) to a line y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + Predictor-Response samples as tuples + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response vector Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response matrix Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + + + + Weighted Linear Regression using normal equations. + + Predictor matrix X + Response vector Y + Weight matrix W, usually diagonal with an entry for each predictor (row). + True if an intercept should be added as first artificial predictor value. Default = false. + + + + Weighted Linear Regression using normal equations. + + List of sample vectors (predictor) together with their response. + List of weights, one for each sample. + True if an intercept should be added as first artificial predictor value. Default = false. 
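For the simple line-fit entries above, a hedged sketch of the two-parameter fit (the `Fit.Line` name is an assumption based on Math.NET Numerics):

```csharp
// Hedged sketch: fit y ≈ a + b*x to sample points; Fit.Line is an assumed name
// (Math.NET Numerics) returning the (intercept, slope) pair described above.
using MathNet.Numerics;

double[] xs = { 1, 2, 3, 4, 5 };
double[] ys = { 2.1, 3.9, 6.2, 8.0, 9.9 };

var p = Fit.Line(xs, ys);
double intercept = p.Item1;   // ≈ 0.11
double slope     = p.Item2;   // ≈ 1.97
```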
+ + + + Locally-Weighted Linear Regression using normal equations. + + + + + Locally-Weighted Linear Regression using normal equations. + + + + + First Order AB method(same as Forward Euler) + + Initial value + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Second Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Third Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + Fourth Order AB Method + + Initial value 1 + Start Time + End Time + Size of output array(the larger, the finer) + ode model + approximation with size N + + + + ODE Solver Algorithms + + + + + Second Order Runge-Kutta method + + initial value + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Fourth Order Runge-Kutta method + + initial value + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Second Order Runge-Kutta to solve ODE SYSTEM + + initial vector + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Fourth Order Runge-Kutta to solve ODE SYSTEM + + initial vector + start time + end time + Size of output array(the larger, the finer) + ode function + approximations + + + + Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm is an iterative method for solving box-constrained nonlinear optimization problems + http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz + + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The lower bound + The upper bound + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems + + + + + Creates BFGS minimizer + + The gradient tolerance + The parameter tolerance + The function progress tolerance + The maximum number of iterations + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + + Creates a base class for BFGS minimization + + + + + Broyden-Fletcher-Goldfarb-Shanno solver for finding function minima + See http://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm + Inspired by implementation: https://github.com/PatWie/CppNumericalSolvers/blob/master/src/BfgsSolver.cpp + + + + + Finds a minimum of a function by the BFGS quasi-Newton method + This uses the function and it's gradient (partial derivatives in each direction) and approximates the Hessian + + An initial guess + Evaluates the function at a point + Evaluates the gradient of the function at a point + The minimum found + + + + Objective function with a frozen evaluation that must not be changed from the outside. + + + + Create a new unevaluated and independent copy of this objective function + + + + Objective function with a mutable evaluation. + + + + Create a new independent copy of this objective function, evaluated at the same point. + + + + Get the y-values of the observations. 
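The Runge-Kutta entries above list the parameters in the order used below; assuming a scalar `RungeKutta.FourthOrder` overload in an OdeSolvers namespace (as in Math.NET Numerics), a sketch:

```csharp
// Hedged sketch: solve dy/dt = -y, y(0) = 1 with classic RK4. The scalar
// RungeKutta.FourthOrder overload and its parameter order are assumptions
// mirroring the parameter list given above.
using MathNet.Numerics.OdeSolvers;

double[] y = RungeKutta.FourthOrder(
    1.0,              // initial value y(0)
    0.0,              // start time
    5.0,              // end time
    101,              // size of the output array (larger = finer grid)
    (t, yt) => -yt);  // ODE right-hand side f(t, y)

// y[i] approximates exp(-t_i) on an equally spaced grid from t = 0 to 5.
```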
+ + + + + Get the values of the weights for the observations. + + + + + Get the y-values of the fitted model that correspond to the independent values. + + + + + Get the values of the parameters. + + + + + Get the residual sum of squares. + + + + + Get the Gradient vector. G = J'(y - f(x; p)) + + + + + Get the approximated Hessian matrix. H = J'J + + + + + Get the number of calls to function. + + + + + Get the number of calls to jacobian. + + + + + Get the degree of freedom. + + + + + The scale factor for initial mu + + + + + Non-linear least square fitting by the Levenberg-Marduardt algorithm. + + The objective function, including model, observations, and parameter bounds. + The initial guess values. + The initial damping parameter of mu. + The stopping threshold for infinity norm of the gradient vector. + The stopping threshold for L2 norm of the change of parameters. + The stopping threshold for L2 norm of the residuals. + The max iterations. + The result of the Levenberg-Marquardt minimization + + + + Limited Memory version of Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm + + + + + + Creates L-BFGS minimizer + + Numbers of gradients and steps to store. + + + + Find the minimum of the objective function given lower and upper bounds + + The objective function, must support a gradient + The initial guess + The MinimizationResult which contains the minimum and the ExitCondition + + + + Search for a step size alpha that satisfies the weak Wolfe conditions. The weak Wolfe + Conditions are + i) Armijo Rule: f(x_k + alpha_k p_k) <= f(x_k) + c1 alpha_k p_k^T g(x_k) + ii) Curvature Condition: p_k^T g(x_k + alpha_k p_k) >= c2 p_k^T g(x_k) + where g(x) is the gradient of f(x), 0 < c1 < c2 < 1. + + Implementation is based on http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + + references: + http://en.wikipedia.org/wiki/Wolfe_conditions + http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + + + + Implemented following http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf + The objective function being optimized, evaluated at the starting point of the search + Search direction + Initial size of the step in the search direction + + + + The objective function being optimized, evaluated at the starting point of the search + Search direction + Initial size of the step in the search direction + The upper bound + + + + Creates a base class for minimization + + The gradient tolerance + The parameter tolerance + The function progress tolerance + The maximum number of iterations + + + + Class implementing the Nelder-Mead simplex algorithm, used to find a minima when no gradient is available. + Called fminsearch() in Matlab. 
A description of the algorithm can be found at + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + or + https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method + + + + + Finds the minimum of the objective function without an initial perturbation, the default values used + by fminsearch() in Matlab are used instead + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + + The objective function, no gradient or hessian needed + The initial guess + The minimum point + + + + Finds the minimum of the objective function with an initial perturbation + + The objective function, no gradient or hessian needed + The initial guess + The initial perturbation + The minimum point + + + + Finds the minimum of the objective function without an initial perturbation, the default values used + by fminsearch() in Matlab are used instead + http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 + + The objective function, no gradient or hessian needed + The initial guess + The minimum point + + + + Finds the minimum of the objective function with an initial perturbation + + The objective function, no gradient or hessian needed + The initial guess + The initial perturbation + The minimum point + + + + Evaluate the objective function at each vertex to create a corresponding + list of error values for each vertex + + + + + + + + Check whether the points in the error profile have so little range that we + consider ourselves to have converged + + + + + + + + + Examine all error values to determine the ErrorProfile + + + + + + + Construct an initial simplex, given starting guesses for the constants, and + initial step sizes for each dimension + + + + + + + Test a scaling operation of the high point, and replace it if it is an improvement + + + + + + + + + + + Contract the simplex uniformly around the lowest point + + + + + + + + + Compute the centroid of all points except the worst + + + + + + + + The value of the constant + + + + + Returns the best fit parameters. + + + + + Returns the standard errors of the corresponding parameters + + + + + Returns the y-values of the fitted model that correspond to the independent values. + + + + + Returns the covariance matrix at minimizing point. + + + + + Returns the correlation matrix at minimizing point. + + + + + The stopping threshold for the function value or L2 norm of the residuals. + + + + + The stopping threshold for L2 norm of the change of the parameters. + + + + + The stopping threshold for infinity norm of the gradient. + + + + + The maximum number of iterations. + + + + + The lower bound of the parameters. + + + + + The upper bound of the parameters. + + + + + The scale factors for the parameters. + + + + + Objective function where neither Gradient nor Hessian is available. + + + + + Objective function where the Gradient is available. Greedy evaluation. + + + + + Objective function where the Gradient is available. Lazy evaluation. + + + + + Objective function where the Hessian is available. Greedy evaluation. + + + + + Objective function where the Hessian is available. Lazy evaluation. + + + + + Objective function where both Gradient and Hessian are available. Greedy evaluation. + + + + + Objective function where both Gradient and Hessian are available. Lazy evaluation. + + + + + Objective function where neither first nor second derivative is available. + + + + + Objective function where the first derivative is available. 
+ + + + + Objective function where the first and second derivatives are available. + + + + + objective model with a user supplied jacobian for non-linear least squares regression. + + + + + Objective model for non-linear least squares regression. + + + + + Objective model with a user supplied jacobian for non-linear least squares regression. + + + + + Objective model for non-linear least squares regression. + + + + + Objective function with a user supplied jacobian for nonlinear least squares regression. + + + + + Objective function for nonlinear least squares regression. + The numerical jacobian with accuracy order is used. + + + + + Adapts an objective function with only value implemented + to provide a gradient as well. Gradient calculation is + done using the finite difference method, specifically + forward differences. + + For each gradient computed, the algorithm requires an + additional number of function evaluations equal to the + functions's number of input parameters. + + + + + Set or get the values of the independent variable. + + + + + Set or get the values of the observations. + + + + + Set or get the values of the weights for the observations. + + + + + Get whether parameters are fixed or free. + + + + + Get the number of observations. + + + + + Get the number of unknown parameters. + + + + + Get the degree of freedom + + + + + Get the number of calls to function. + + + + + Get the number of calls to jacobian. + + + + + Set or get the values of the parameters. + + + + + Get the y-values of the fitted model that correspond to the independent values. + + + + + Get the residual sum of squares. + + + + + Get the Gradient vector of x and p. + + + + + Get the Hessian matrix of x and p, J'WJ + + + + + Set observed data to fit. + + + + + Set parameters and bounds. + + The initial values of parameters. + The list to the parameters fix or free. + + + + Non-linear least square fitting by the trust region dogleg algorithm. + + + + + The trust region subproblem. + + + + + The stopping threshold for the trust region radius. + + + + + Non-linear least square fitting by the trust-region algorithm. + + The objective model, including function, jacobian, observations, and parameter bounds. + The subproblem + The initial guess values. + The stopping threshold for L2 norm of the residuals. + The stopping threshold for infinity norm of the gradient vector. + The stopping threshold for L2 norm of the change of parameters. + The stopping threshold for trust region radius + The max iterations. + + + + + Non-linear least square fitting by the trust region Newton-Conjugate-Gradient algorithm. + + + + + Class to represent a permutation for a subset of the natural numbers. + + + + + Entry _indices[i] represents the location to which i is permuted to. + + + + + Initializes a new instance of the Permutation class. + + An array which represents where each integer is permuted too: indices[i] represents that integer i + is permuted to location indices[i]. + + + + Gets the number of elements this permutation is over. + + + + + Computes where permutes too. + + The index to permute from. + The index which is permuted to. + + + + Computes the inverse of the permutation. + + The inverse of the permutation. + + + + Construct an array from a sequence of inversions. + + + From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be + encoded using the array [22244]. + + The set of inversions to construct the permutation from. + A permutation generated from a sequence of inversions. 
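A brief sketch of the `Permutation` type described above; the indexer, `Dimension` and `Inverse()` members are inferred from the entries:

```csharp
// Hedged sketch of the Permutation type: indices[i] is the position element i is permuted to.
using MathNet.Numerics;

var p = new Permutation(new[] { 2, 0, 1 });  // element 0 -> 2, 1 -> 0, 2 -> 1

int movesTo = p[0];        // 2: position that element 0 is permuted to
int size    = p.Dimension; // 3: number of elements the permutation is over

Permutation inverse = p.Inverse();  // maps 2 -> 0, 0 -> 1, 1 -> 2
```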
+ + + + Construct a sequence of inversions from the permutation. + + + From wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4). This would be + encoded using the array [22244]. + + A sequence of inversions. + + + + Checks whether the array represents a proper permutation. + + An array which represents where each integer is permuted too: indices[i] represents that integer i + is permuted to location indices[i]. + True if represents a proper permutation, false otherwise. + + + + A single-variable polynomial with real-valued coefficients and non-negative exponents. + + + + + The coefficients of the polynomial in a + + + + + Only needed for the ToString method + + + + + Degree of the polynomial, i.e. the largest monomial exponent. For example, the degree of y=x^2+x^5 is 5, for y=3 it is 0. + The null-polynomial returns degree -1 because the correct degree, negative infinity, cannot be represented by integers. + + + + + Create a zero-polynomial with a coefficient array of the given length. + An array of length N can support polynomials of a degree of at most N-1. + + Length of the coefficient array + + + + Create a zero-polynomial + + + + + Create a constant polynomial. + Example: 3.0 -> "p : x -> 3.0" + + The coefficient of the "x^0" monomial. + + + + Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). + Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". + + Polynomial coefficients as array + + + + Create a polynomial with the provided coefficients (in ascending order, where the index matches the exponent). + Example: {5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2". + + Polynomial coefficients as enumerable + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k + + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered ascending by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + + The location where to evaluate the polynomial at. + + + + Evaluate a polynomial at point x. + + The location where to evaluate the polynomial at. + + + + Evaluate a polynomial at points z. + + The locations where to evaluate the polynomial at. + + + + Evaluate a polynomial at points z. + + The locations where to evaluate the polynomial at. + + + + Calculates the complex roots of the Polynomial by eigenvalue decomposition + + a vector of complex numbers with the roots + + + + Get the eigenvalue matrix A of this polynomial such that eig(A) = roots of this polynomial. + + Eigenvalue matrix A + This matrix is similar to the companion matrix of this polynomial, in such a way, that it's transpose is the columnflip of the companion matrix + + + + Addition of two Polynomials (point-wise). 
+ + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Addition of a polynomial and a scalar. + + + + + Subtraction of two Polynomials (point-wise). + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Addition of a scalar from a polynomial. + + + + + Addition of a polynomial from a scalar. + + + + + Negation of a polynomial. + + + + + Multiplies a polynomial by a polynomial (convolution) + + Left polynomial + Right polynomial + Resulting Polynomial + + + + Scales a polynomial by a scalar + + Polynomial + Scalar value + Resulting Polynomial + + + + Scales a polynomial by division by a scalar + + Polynomial + Scalar value + Resulting Polynomial + + + + Euclidean long division of two polynomials, returning the quotient q and remainder r of the two polynomials a and b such that a = q*b + r + + Left polynomial + Right polynomial + A tuple holding quotient in first and remainder in second + + + + Point-wise division of two Polynomials + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Point-wise multiplication of two Polynomials + + Left Polynomial + Right Polynomial + Resulting Polynomial + + + + Division of two polynomials returning the quotient-with-remainder of the two polynomials given + + Right polynomial + A tuple holding quotient in first and remainder in second + + + + Addition of two Polynomials (piecewise) + + Left polynomial + Right polynomial + Resulting Polynomial + + + + adds a scalar to a polynomial. + + Polynomial + Scalar value + Resulting Polynomial + + + + adds a scalar to a polynomial. + + Scalar value + Polynomial + Resulting Polynomial + + + + Subtraction of two polynomial. + + Left polynomial + Right polynomial + Resulting Polynomial + + + + Subtracts a scalar from a polynomial. + + Polynomial + Scalar value + Resulting Polynomial + + + + Subtracts a polynomial from a scalar. + + Scalar value + Polynomial + Resulting Polynomial + + + + Negates a polynomial. + + Polynomial + Resulting Polynomial + + + + Multiplies a polynomial by a polynomial (convolution). + + Left polynomial + Right polynomial + resulting Polynomial + + + + Multiplies a polynomial by a scalar. + + Polynomial + Scalar value + Resulting Polynomial + + + + Multiplies a polynomial by a scalar. + + Scalar value + Polynomial + Resulting Polynomial + + + + Divides a polynomial by scalar value. + + Polynomial + Scalar value + Resulting Polynomial + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Format the polynomial in ascending order, e.g. "4.3 + 2.0x^2 - x^3". + + + + + Format the polynomial in descending order, e.g. "x^3 + 2.0x^2 - 4.3". + + + + + Utilities for working with floating point numbers. + + + + Useful links: + + + http://docs.sun.com/source/806-3568/ncg_goldberg.html#689 - What every computer scientist should know about floating-point arithmetic + + + http://en.wikipedia.org/wiki/Machine_epsilon - Gives the definition of machine epsilon + + + + + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. 
+ The second value. + The absolute accuracy required for being almost equal. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The number of decimal places on which the values must be compared. Must be 1 or larger. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The relative accuracy required for being almost equal. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The number of decimal places on which the values must be compared. Must be 1 or larger. + + + + Compares two doubles and determines which double is bigger. + a < b -> -1; a ~= b (almost equal according to parameter) -> 0; a > b -> +1. + + The first value. + The second value. + The maximum error in terms of Units in Last Place (ulps), i.e. the maximum number of decimals that may be different. Must be 1 or larger. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. 
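Several of the comparison entries above use "number of decimal places" with half the range on each side (the 0.01 vs. 0.005..0.015 example). A minimal sketch of that rule, assuming an inclusive boundary and a method name of my own choosing, not the library's exact behaviour:

```csharp
using System;

static class DecimalPlacesComparisonSketch
{
    // Two values are considered equal when their difference is at most half of
    // 10^(-decimalPlaces), i.e. half the range on each side of a value.
    // With decimalPlaces == 2 the threshold is 0.005, so 0.01 matches 0.005..0.015
    // but not 0.00 or 0.02 (boundary treated as inclusive in this sketch).
    public static bool AlmostEqualDecimalPlaces(double a, double b, int decimalPlaces)
    {
        if (decimalPlaces < 1)
            throw new ArgumentOutOfRangeException(nameof(decimalPlaces), "Must be 1 or larger.");

        double threshold = 0.5 * Math.Pow(10.0, -decimalPlaces);
        return Math.Abs(a - b) <= threshold;
    }

    static void Main()
    {
        Console.WriteLine(AlmostEqualDecimalPlaces(0.01, 0.014, 2)); // True
        Console.WriteLine(AlmostEqualDecimalPlaces(0.01, 0.02, 2));  // False
    }
}
```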
+ true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is larger than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is larger than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of thg. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of thg. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. 
+ The absolute accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The number of decimal places. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the specified number of decimal places or not. + + The first value. + The second value. + The relative accuracy required for being almost equal. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is smaller than the second value; otherwise false. + + + + Compares two doubles and determines if the first value is smaller than the second + value to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values for which the two values are considered equal. Must be 1 or larger. + true if the first value is smaller than the second value; otherwise false. + + + + Checks if a given double values is finite, i.e. neither NaN nor inifnity + + The value to be checked fo finitenes. + + + + The number of binary digits used to represent the binary number for a double precision floating + point value. i.e. there are this many digits used to represent the + actual number, where in a number as: 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5. + + + + + The number of binary digits used to represent the binary number for a single precision floating + point value. i.e. there are this many digits used to represent the + actual number, where in a number as: 0.134556 * 10^5 the digits are 0.134556 and the exponent is 5. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 double-precision floating numbers (64 bit). + According to the definition of Prof. Demmel and used in LAPACK and Scilab. 
+ + + + + Standard epsilon, the maximum relative precision of IEEE 754 double-precision floating numbers (64 bit). + According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). + According to the definition of Prof. Demmel and used in LAPACK and Scilab. + + + + + Standard epsilon, the maximum relative precision of IEEE 754 single-precision floating numbers (32 bit). + According to the definition of Prof. Higham and used in the ISO C standard and MATLAB. + + + + + Actual double precision machine epsilon, the smallest number that can be subtracted from 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Demmel. + On a standard machine this is equivalent to `DoublePrecision`. + + + + + Actual double precision machine epsilon, the smallest number that can be added to 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Higham. + On a standard machine this is equivalent to `PositiveDoublePrecision`. + + + + + The number of significant decimal places of double-precision floating numbers (64 bit). + + + + + The number of significant decimal places of single-precision floating numbers (32 bit). + + + + + Value representing 10 * 2^(-53) = 1.11022302462516E-15 + + + + + Value representing 10 * 2^(-24) = 5.96046447753906E-07 + + + + + Returns the magnitude of the number. + + The value. + The magnitude of the number. + + + + Returns the magnitude of the number. + + The value. + The magnitude of the number. + + + + Returns the number divided by it's magnitude, effectively returning a number between -10 and 10. + + The value. + The value of the number. + + + + Returns a 'directional' long value. This is a long value which acts the same as a double, + e.g. a negative double value will return a negative double value starting at 0 and going + more negative as the double value gets more negative. + + The input double value. + A long value which is roughly the equivalent of the double value. + + + + Returns a 'directional' int value. This is a int value which acts the same as a float, + e.g. a negative float value will return a negative int value starting at 0 and going + more negative as the float value gets more negative. + + The input float value. + An int value which is roughly the equivalent of the double value. + + + + Increments a floating point number to the next bigger number representable by the data type. + + The value which needs to be incremented. + How many times the number should be incremented. + + The incrementation step length depends on the provided value. + Increment(double.MaxValue) will return positive infinity. + + The next larger floating point value. + + + + Decrements a floating point number to the next smaller number representable by the data type. + + The value which should be decremented. + How many times the number should be decremented. + + The decrementation step length depends on the provided value. + Decrement(double.MinValue) will return negative infinity. + + The next smaller floating point value. + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The maximum count of numbers between the zero and the number . + + Zero if || is fewer than numbers from zero, otherwise. 
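The entries above mention a "directional" long value that orders like the double it came from, and Increment/Decrement stepping to the next representable number. A hedged sketch of the underlying bit manipulation (edge cases simplified; infinities and NaN are passed through, this is not the library's implementation):

```csharp
using System;

static class UlpStepSketch
{
    // Map a double to a "directional" 64-bit integer: the ordering of the longs
    // matches the ordering of the doubles, and adjacent doubles map to adjacent longs.
    public static long AsDirectionalInt64(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        // Negative doubles have the sign bit set; remap them so the mapping is monotone.
        return bits >= 0 ? bits : long.MinValue - bits;
    }

    // Step to the next larger representable double (one ULP up).
    public static double Increment(double value)
    {
        if (double.IsInfinity(value) || double.IsNaN(value)) return value; // left unchanged here
        if (value == 0.0) return double.Epsilon;  // next double after +/-0 is the smallest subnormal

        long bits = BitConverter.DoubleToInt64Bits(value);
        // For negative values the raw bits grow towards more negative doubles,
        // so stepping "up" means decreasing the bit pattern.
        bits = value < 0 ? bits - 1 : bits + 1;
        return BitConverter.Int64BitsToDouble(bits);
    }

    static void Main()
    {
        Console.WriteLine(Increment(1.0) - 1.0);                               // ~2.22e-16, one ULP at 1.0
        Console.WriteLine(AsDirectionalInt64(Increment(1.0)) - AsDirectionalInt64(1.0)); // 1
    }
}
```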
+ + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The maximum count of numbers between the zero and the number . + + Zero if || is fewer than numbers from zero, otherwise. + + + Thrown if is smaller than zero. + + + + + Forces small numbers near zero to zero, according to the specified absolute accuracy. + + The real number to coerce to zero, if it is almost zero. + The absolute threshold for to consider it as zero. + Zero if || is smaller than , otherwise. + + Thrown if is smaller than zero. + + + + + Forces small numbers near zero to zero. + + The real number to coerce to zero, if it is almost zero. + Zero if || is smaller than 2^(-53) = 1.11e-16, otherwise. + + + + Determines the range of floating point numbers that will match the specified value with the given tolerance. + + The value. + The ulps difference. + + Thrown if is smaller than zero. + + Tuple of the bottom and top range ends. + + + + Returns the floating point number that will match the value with the tolerance on the maximum size (i.e. the result is + always bigger than the value) + + The value. + The ulps difference. + The maximum floating point number which is larger than the given . + + + + Returns the floating point number that will match the value with the tolerance on the minimum size (i.e. the result is + always smaller than the value) + + The value. + The ulps difference. + The minimum floating point number which is smaller than the given . + + + + Determines the range of ulps that will match the specified value with the given tolerance. + + The value. + The relative difference. + + Thrown if is smaller than zero. + + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + Tuple with the number of ULPS between the value and the value - relativeDifference as first, + and the number of ULPS between the value and the value + relativeDifference as second value. + + + + + Evaluates the count of numbers between two double numbers + + The first parameter. + The second parameter. + The second number is included in the number, thus two equal numbers evaluate to zero and two neighbor numbers evaluate to one. Therefore, what is returned is actually the count of numbers between plus 1. + The number of floating point values between and . + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + Thrown if is double.PositiveInfinity or double.NegativeInfinity. + + + Thrown if is double.NaN. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + + Relative Epsilon (positive double or NaN). + + Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + + Relative Epsilon (positive float or NaN). + + Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive double or NaN) + Evaluates the positive epsilon. 
See also + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive float or NaN) + Evaluates the positive epsilon. See also + + + + + Calculates the actual (negative) double precision machine epsilon - the smallest number that can be subtracted from 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Demmel. + + Positive Machine epsilon + + + + Calculates the actual positive double precision machine epsilon - the smallest number that can be added to 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Higham. + + Machine epsilon + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. 
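The "actual machine epsilon" entries describe measuring, at run time, the smallest number that still changes 1.0 when subtracted (Demmel) or added (Higham). A classic probe loop illustrating both definitions (a sketch, not the library's routine):

```csharp
using System;

static class MachineEpsilonSketch
{
    // Demmel-style (negative) machine epsilon: the smallest eps with (1 - eps) != 1.
    // On IEEE 754 doubles this measures 2^-53, about 1.11e-16.
    public static double MeasureNegativeMachineEpsilon()
    {
        double eps = 1.0;
        while ((1.0 - (eps / 2.0)) != 1.0)
        {
            eps /= 2.0;
        }
        return eps;
    }

    // Higham-style (positive) machine epsilon: the smallest eps with (1 + eps) != 1.
    // On IEEE 754 doubles this measures 2^-52, about 2.22e-16.
    public static double MeasurePositiveMachineEpsilon()
    {
        double eps = 1.0;
        while ((1.0 + (eps / 2.0)) != 1.0)
        {
            eps /= 2.0;
        }
        return eps;
    }

    static void Main()
    {
        Console.WriteLine(MeasureNegativeMachineEpsilon());
        Console.WriteLine(MeasurePositiveMachineEpsilon());
    }
}
```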
+ + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. 
+ Thrown if is smaller than zero. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. + + + + Determines the 'number' of floating point numbers between two values (i.e. the number of discrete steps + between the two numbers) and then checks if that is within the specified tolerance. So if a tolerance + of 1 is passed then the result will be true only if the two numbers have the same binary representation + OR if they are two adjacent numbers that only differ by one step. 
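The ulps-based equality described here (count the discrete floating-point steps between two values via their binary representation, following the cygnus-software article) can be sketched as below. Method names and the handling of infinities are my own simplifications, and the long subtraction can overflow for values of opposite extreme magnitude:

```csharp
using System;

static class UlpEqualitySketch
{
    static long AsDirectionalInt64(double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        return bits >= 0 ? bits : long.MinValue - bits;
    }

    // Equality based on the binary representation: count how many representable
    // doubles lie between a and b and compare that count against maxNumbersBetween.
    public static bool AlmostEqualNumbersBetween(double a, double b, long maxNumbersBetween)
    {
        if (maxNumbersBetween < 1)
            throw new ArgumentOutOfRangeException(nameof(maxNumbersBetween), "Must be 1 or larger.");
        if (double.IsNaN(a) || double.IsNaN(b)) return false;
        if (double.IsInfinity(a) || double.IsInfinity(b)) return a == b; // compared exactly in this sketch

        long ulps = Math.Abs(AsDirectionalInt64(a) - AsDirectionalInt64(b));
        return ulps <= maxNumbersBetween;
    }

    static void Main()
    {
        double next = BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(1.0) + 1);
        Console.WriteLine(AlmostEqualNumbersBetween(1.0, next, 1)); // True: adjacent doubles
        Console.WriteLine(AlmostEqualNumbersBetween(1.0, 1.1, 1));  // False
    }
}
```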
+ + + The comparison method used is explained in http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm . The article + at http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to transform the C code to + .NET enabled code without using pointers and unsafe code. + + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two floats and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. 
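The list-comparison entries reduce to an element-wise check with a shared tolerance. A compact sketch under that reading (absolute tolerance variant only, name is mine):

```csharp
using System;
using System.Collections.Generic;

static class ListComparisonSketch
{
    // Two sequences are "almost equal" when they have the same length and every
    // pair of corresponding elements is within the absolute tolerance.
    public static bool ListAlmostEqual(IList<double> a, IList<double> b, double maximumAbsoluteError)
    {
        if (a.Count != b.Count) return false;
        for (int i = 0; i < a.Count; i++)
        {
            if (double.IsNaN(a[i]) || double.IsNaN(b[i])) return false;
            if (Math.Abs(a[i] - b[i]) > maximumAbsoluteError) return false;
        }
        return true;
    }
}
```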
+ + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two vectors and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Support Interface for Precision Operations (like AlmostEquals). + + Type of the implementing class. + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + A norm of this value. + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + The value to compare with. + A norm of the difference between this and the other value. + + + + Consistency vs. performance trade-off between runs on different machines. 
+ + + + Consistent on the same CPU only (maximum performance) + + + Consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility) + + + Consistent on Intel CPUs supporting SSE2 or later + + + Consistent on Intel CPUs supporting SSE4.2 or later + + + Consistent on Intel CPUs supporting AVX or later + + + Consistent on Intel CPUs supporting AVX2 or later + + + + Gets or sets the Fourier transform provider. Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsFFTProviderPath` or the default probing paths. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsFFTProvider" environment variable, + or fall back to the best provider. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 + will cause k*k in the Bluestein sequence to overflow (GH-286). + + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Half rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Fully rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Radix-2 Reorder Helper Method + + Sample type + Sample vector + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 generic FFT for power-of-two sized sample vectors. 
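The Bluestein entries describe generating the chirp sequence exp(i*pi*k^2/N) and warn that k*k overflows Int32 once the length exceeds sqrt(Int32.MaxValue) + 1 (GH-286). A sketch of an overflow-safe generator using 64-bit arithmetic and the periodicity of the chirp in k^2 mod 2N (illustrative, not the provider's code):

```csharp
using System;
using System.Numerics;

static class BluesteinSketch
{
    // Bluestein chirp sequence exp(i*pi*k^2/N) for a problem of size n.
    // k*k is accumulated in a long and reduced modulo 2N, both to avoid the
    // Int32 overflow mentioned above and to keep the Sin/Cos argument small.
    public static Complex[] BluesteinSequence(int n)
    {
        double s = Math.PI / n;
        var sequence = new Complex[n];

        for (int k = 0; k < n; k++)
        {
            long t = ((long)k * k) % (2L * n);   // exp(i*pi*k^2/n) is periodic in k^2 mod 2n
            double angle = s * t;
            sequence[k] = new Complex(Math.Cos(angle), Math.Sin(angle));
        }

        return sequence;
    }

    static void Main()
    {
        foreach (var c in BluesteinSequence(4))
        {
            Console.WriteLine(c);
        }
    }
}
```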
+ + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + How to transpose a matrix. + + + + + Don't transpose a matrix. + + + + + Transpose a matrix. + + + + + Conjugate transpose a complex matrix. + + If a conjugate transpose is used with a real matrix, then the matrix is just transposed. + + + + Types of matrix norms. + + + + + The 1-norm. + + + + + The Frobenius norm. + + + + + The infinity norm. + + + + + The largest absolute value norm. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + Supported data types are Double, Single, Complex, and Complex32. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiply elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. 
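The provider entries above define BLAS-level-1 style operations on flat arrays, e.g. the AXPY update result = y + alpha*x and point-wise array products. A minimal sketch of the described semantics (not the provider's optimized implementation):

```csharp
using System;

static class Level1Sketch
{
    // AXPY-style update: result[i] = y[i] + alpha * x[i].
    public static void AddVectorToScaledVector(double[] y, double alpha, double[] x, double[] result)
    {
        if (y.Length != x.Length || y.Length != result.Length)
            throw new ArgumentException("All arrays must have the same length.");

        for (int i = 0; i < y.Length; i++)
        {
            result[i] = y[i] + alpha * x[i];
        }
    }

    // Point-wise multiplication z[i] = x[i] * y[i]; works for vectors or flattened matrices.
    public static void PointWiseMultiply(double[] x, double[] y, double[] z)
    {
        for (int i = 0; i < x.Length; i++)
        {
            z[i] = x[i] * y[i];
        }
    }

    static void Main()
    {
        var y = new[] { 1.0, 2.0, 3.0 };
        var x = new[] { 10.0, 10.0, 10.0 };
        var r = new double[3];
        AddVectorToScaledVector(y, 0.5, x, r);
        Console.WriteLine(string.Join(", ", r)); // 6, 7, 8
    }
}
```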
+ There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. 
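The GEMM-style entry above updates c = alpha*op(a)*op(b) + beta*c. As a naive reference (no transposition, triple loop, and assuming column-major flat storage, which is an assumption of this sketch rather than something stated here):

```csharp
using System;

static class GemmSketch
{
    // c = alpha * a * b + beta * c for column-major flat arrays.
    // a is rowsA x colsA, b is colsA x colsB, c is rowsA x colsB.
    public static void MultiplyWithUpdate(
        double alpha, double[] a, int rowsA, int colsA,
        double[] b, int colsB, double beta, double[] c)
    {
        for (int j = 0; j < colsB; j++)
        {
            for (int i = 0; i < rowsA; i++)
            {
                double sum = 0.0;
                for (int k = 0; k < colsA; k++)
                {
                    // column-major: element (row, col) lives at col * rowCount + row
                    sum += a[k * rowsA + i] * b[j * colsA + k];
                }
                c[j * rowsA + i] = alpha * sum + beta * c[j * rowsA + i];
            }
        }
    }

    static void Main()
    {
        // 2x2 identity times [[1,2],[3,4]] (column-major) leaves it unchanged.
        var a = new[] { 1.0, 0.0, 0.0, 1.0 };
        var b = new[] { 1.0, 3.0, 2.0, 4.0 };
        var c = new double[4];
        MultiplyWithUpdate(1.0, a, 2, 2, b, 2, 0.0, c);
        Console.WriteLine(string.Join(", ", c)); // 1, 3, 2, 4
    }
}
```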
+ This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the full QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by QR factor. This is only used for the managed provider and can be + null for the native provider. The native provider uses the Q portion stored in the R matrix. + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + On entry the B matrix; on exit the X matrix. + The number of columns of B. + On exit, the solution matrix. + Rows must be greater or equal to columns. + The type of QR factorization to perform. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. 
+ The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Gets or sets the linear algebra provider. + Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsLAProviderPath` or the default probing paths. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsLAProvider" environment variable, + or fall back to the best provider. + + + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. 
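The matrix-norm entries refer back to the norm types listed earlier (1-norm, Frobenius, infinity, largest absolute value). A sketch of the four definitions on a flat array, again assuming column-major storage for illustration:

```csharp
using System;

static class MatrixNormSketch
{
    // 1-norm: largest absolute column sum.
    public static double OneNorm(double[] a, int rows, int columns)
    {
        double max = 0.0;
        for (int j = 0; j < columns; j++)
        {
            double colSum = 0.0;
            for (int i = 0; i < rows; i++) colSum += Math.Abs(a[j * rows + i]);
            max = Math.Max(max, colSum);
        }
        return max;
    }

    // Infinity norm: largest absolute row sum.
    public static double InfinityNorm(double[] a, int rows, int columns)
    {
        double max = 0.0;
        for (int i = 0; i < rows; i++)
        {
            double rowSum = 0.0;
            for (int j = 0; j < columns; j++) rowSum += Math.Abs(a[j * rows + i]);
            max = Math.Max(max, rowSum);
        }
        return max;
    }

    // Frobenius norm: square root of the sum of squared entries.
    public static double FrobeniusNorm(double[] a)
    {
        double sum = 0.0;
        foreach (double v in a) sum += v * v;
        return Math.Sqrt(sum);
    }

    // Largest absolute value norm.
    public static double LargestAbsoluteValue(double[] a)
    {
        double max = 0.0;
        foreach (double v in a) max = Math.Max(max, Math.Abs(v));
        return max;
    }
}
```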
+ + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. 
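The Cholesky entries describe a POTRF-style in-place factorization of a symmetric positive definite matrix. A didactic column-by-column sketch (column-major flat storage assumed, not the provider's parallel implementation):

```csharp
using System;

static class CholeskySketch
{
    // In-place Cholesky factorization: on return the lower triangle of 'a' holds L
    // with A = L * L^T, and the strict upper triangle is cleared to zero.
    public static void CholeskyFactor(double[] a, int order)
    {
        for (int j = 0; j < order; j++)
        {
            // diagonal entry: A[j,j] minus the squares of the already computed L[j,k]
            double d = a[j * order + j];
            for (int k = 0; k < j; k++)
            {
                double l = a[k * order + j];   // L[j,k]
                d -= l * l;
            }
            if (d <= 0.0) throw new ArgumentException("Matrix is not positive definite.");
            d = Math.Sqrt(d);
            a[j * order + j] = d;

            // entries below the diagonal of column j
            for (int i = j + 1; i < order; i++)
            {
                double sum = a[j * order + i]; // A[i,j]
                for (int k = 0; k < j; k++)
                {
                    sum -= a[k * order + i] * a[k * order + j];  // L[i,k] * L[j,k]
                }
                a[j * order + i] = sum / d;    // L[i,j]
            }

            // clear the strict upper triangle of column j
            for (int i = 0; i < j; i++)
            {
                a[j * order + i] = 0.0;
            }
        }
    }

    // Example: {4, 2, 2, 3} (column-major [[4,2],[2,3]]) factors to L = [[2,0],[1,~1.414]].
}
```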
+ The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + The B matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. 
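The QR entries split A into an orthonormal Q and an upper-triangular R. As a simplified illustration of the thin QR decomposition, the sketch below uses modified Gram-Schmidt rather than the Householder-based GEQRF/ORGQR approach the entries reference, and assumes column-major flat storage and full column rank:

```csharp
using System;

static class ThinQrSketch
{
    // Thin QR of a (rows >= columns) matrix via modified Gram-Schmidt.
    // q has the same shape as a (rows x columns), r is columns x columns, both column-major.
    public static void ThinQr(double[] a, int rows, int columns, double[] q, double[] r)
    {
        Array.Copy(a, q, a.Length);

        for (int j = 0; j < columns; j++)
        {
            // R[j,j] = norm of the (already orthogonalized) column j
            double norm = 0.0;
            for (int i = 0; i < rows; i++) norm += q[j * rows + i] * q[j * rows + i];
            norm = Math.Sqrt(norm);
            if (norm == 0.0) throw new ArgumentException("Matrix is rank deficient.");
            r[j * columns + j] = norm;

            // normalize column j of Q
            for (int i = 0; i < rows; i++) q[j * rows + i] /= norm;

            // remove the component along q_j from the remaining columns
            for (int k = j + 1; k < columns; k++)
            {
                double dot = 0.0;
                for (int i = 0; i < rows; i++) dot += q[j * rows + i] * q[k * rows + i];
                r[k * columns + j] = dot;      // R[j,k]
                for (int i = 0; i < rows; i++) q[k * rows + i] -= dot * q[j * rows + i];
            }
        }
    }
}
```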
+ Computes the singular value decomposition of A.
+ Parameters: whether or not to compute the singular U and VT vectors, the M by N matrix to decompose (which may be overwritten on exit), the number of rows and columns in the A matrix, the singular values of A in ascending value, the U array (if vectors are requested it contains the left singular vectors on exit), and the VT array (if vectors are requested it contains the transposed right singular vectors on exit).
+ This is equivalent to the GESVD LAPACK routine.
+
+ Solves A*X=B for X using the singular value decomposition of A.
+ Parameters: the M by N matrix to decompose (on entry), the number of rows and columns in the A matrix, the B matrix, the number of columns of B, and the solution matrix X (written on exit).
+
+ Solves A*X=B for X using a previously computed singular value decomposition.
+ Parameters: the number of rows and columns in the A matrix, the s values, left singular vectors and transposed right singular vectors returned by the decomposition, the B matrix, the number of columns of B, and the solution matrix X (written on exit).
+
+ Computes the eigenvalues and eigenvectors of a matrix.
+ Parameters: whether the matrix is symmetric or not, the order of the matrix, the matrix to decompose (the length of the array must be order * order), the output matrix of eigenvectors (length order * order), the output eigenvalues λ in ascending value (length order), and the output block diagonal eigenvalue matrix (length order * order).
+
+ Several internal overloads assume that their matrix arguments have already been transposed.
+
+ Adds a scaled vector to another: result = y + alpha*x.
+ Parameters: the vector y to update, the value alpha to scale by, the vector x to add to y, and the result of the addition.
+ This is similar to the AXPY BLAS routine.
+
+ Scales an array; can be used to scale a vector or a matrix.
+ Parameters: the scalar, the values to scale, and the result of the scaling.
+ This is similar to the SCAL BLAS routine.
+
+ Conjugates an array; can be used to conjugate a vector or a matrix.
+ Parameters: the values to conjugate and the result of the conjugation.
+
+ Computes the dot product of x and y.
+ Parameters: the vector x and the vector y. Returns the dot product of x and y.
+ This is equivalent to the DOT BLAS routine.
+
+ Does a point-wise addition of two arrays, z = x + y; this can be used to add vectors or matrices.
+ Does a point-wise subtraction of two arrays, z = x - y; this can be used to subtract vectors or matrices.
+ Does a point-wise multiplication of two arrays, z = x * y; this can be used to multiply elements of vectors or matrices.
+ Does a point-wise division of two arrays, z = x / y; this can be used to divide elements of vectors or matrices.
+ Does a point-wise power of two arrays, z = x ^ y; this can be used to raise elements of vectors or matrices to the powers of another vector or matrix.
+ Each point-wise routine takes the array x, the array y, and the result array z. There is no equivalent BLAS routine, but many libraries provide optimized (parallel and/or vectorized) versions of these operations.
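A compact NumPy sketch of the SVD-based solve and of the AXPY/point-wise array operations listed above; again this illustrates the operations themselves, not this provider's API.

```python
import numpy as np

# SVD-based least squares: decompose A, then x = V * diag(1/s) * U^T * b.
a = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 4.0, 3.0])
u, s, vt = np.linalg.svd(a, full_matrices=False)
x = vt.T @ ((u.T @ b) / s)
print(x)

# AXPY-style update and point-wise operations on arrays.
alpha = 0.5
xv = np.array([2.0, 4.0])
yv = np.array([1.0, 1.0])
print(yv + alpha * xv)   # result = y + alpha*x
print(xv * yv)           # point-wise multiply, z = x * y
print(xv ** yv)          # point-wise power, z = x ^ y
```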
+ Given the Cartesian coordinates (da, db) of a point p, this routine returns the parameters da, db, c, and s associated with the Givens rotation that zeros the y-coordinate of the point.
+ The first parameter provides the x-coordinate of the point p and on exit contains the parameter r associated with the Givens rotation; the second provides the y-coordinate and on exit contains the parameter z; the remaining parameters contain the c and s values of the rotation.
+ This is equivalent to the DROTG LAPACK routine.
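For reference, a small NumPy sketch of the Givens rotation described above, computing c, s and r such that the rotation zeros the second coordinate. It mirrors the classic drotg behaviour only in outline; the sign conventions and edge-case handling of the real routine are omitted.

```python
import numpy as np

def givens(da, db):
    """Return (c, s, r) with [[c, s], [-s, c]] @ [da, db] == [r, 0]."""
    if db == 0.0:
        return 1.0, 0.0, da
    r = np.hypot(da, db)
    return da / r, db / r, r

c, s, r = givens(3.0, 4.0)
rotated = np.array([[c, s], [-s, c]]) @ np.array([3.0, 4.0])
print(c, s, r, rotated)   # rotated is approximately [5, 0]
```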
+ The managed linear algebra provider.
+ Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations.
+ Parameters: the source matrix to reduce, output arrays for internal storage of the real and imaginary parts of the eigenvalues, an output array that contains further information about the transformations, and the order of the initial matrix.
+ This is derived from the Algol procedure HTRIDI by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for Auto. Comp., Vol. II - Linear Algebra, and the corresponding Fortran subroutine in EISPACK.
+
+ Symmetric tridiagonal QL algorithm.
+ Parameters: the data array of the eigenvector matrix V, arrays for internal storage of the real and imaginary parts of the eigenvalues, and the order of the initial matrix.
+ This is derived from the Algol procedure tql2 by Bowdler, Martin, Reinsch, and Wilkinson, Handbook for Auto. Comp., Vol. II - Linear Algebra, and the corresponding Fortran subroutine in EISPACK.
+
+ Determines the eigenvectors by undoing the symmetric tridiagonalization transformation.
+ Parameters: the data array of the eigenvector matrix V, the matrix previously tridiagonalized by the symmetric tridiagonalization step, the array containing further information about the transformations, and the order of the input matrix.
+ This is derived from the Algol procedure HTRIBK by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for Auto. Comp., Vol. II - Linear Algebra, and the corresponding Fortran subroutine in EISPACK.
+
+ Nonsymmetric reduction to Hessenberg form.
+ Parameters: the data array of the eigenvector matrix V, the array for internal storage of the nonsymmetric Hessenberg form, and the order of the initial matrix.
+ This is derived from the Algol procedures orthes and ortran by Martin and Wilkinson, Handbook for Auto. Comp., Vol. II - Linear Algebra, and the corresponding Fortran subroutines in EISPACK.
+
+ Nonsymmetric reduction from Hessenberg to real Schur form.
+ Parameters: the data array of the eigenvectors, the data array of the eigenvector matrix V, the array for internal storage of the nonsymmetric Hessenberg form, and the order of the initial matrix.
+ This is derived from the Algol procedure hqr2 by Martin and Wilkinson, Handbook for Auto. Comp., Vol. II - Linear Algebra, and the corresponding Fortran subroutine in EISPACK.
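The three symmetric-path helpers above (tridiagonalize, tridiagonal QL, back-transform) together form a standard Hermitian eigensolver. A hedged NumPy/SciPy sketch of the result they compute, eigenvalues in ascending order plus eigenvectors, follows; the library calls shown are NumPy/SciPy's, not this provider's.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# End result of the tridiagonalize / QL / back-transform pipeline for a
# Hermitian matrix: ascending eigenvalues and the corresponding eigenvectors.
h = np.array([[2.0, 1.0 + 1.0j], [1.0 - 1.0j, 3.0]])
w, v = np.linalg.eigh(h)
print(w)

# The intermediate object is a real symmetric tridiagonal matrix; its
# eigenproblem can be solved directly from the diagonals.
d = np.array([2.0, 3.0, 4.0])     # main diagonal
e = np.array([1.0, 0.5])          # off-diagonal
w_tri, v_tri = eigh_tridiagonal(d, e)
print(w_tri)
```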
Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of the eigenvectors + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. 
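+The vector kernels summarized above (the AXPY-style update result = y + alpha*x, scaling, and the point-wise array operations) all reduce to simple element loops. The sketch below illustrates two of them; the class and method names are illustrative only and are not the provider's actual API.
+
+```csharp
+// Minimal sketches of two of the array kernels described above.
+// Names are illustrative; this is not the linear algebra provider's API.
+static class VectorKernelsSketch
+{
+    // result = y + alpha*x (AXPY semantics)
+    public static void AddScaled(double[] y, double alpha, double[] x, double[] result)
+    {
+        for (int i = 0; i < y.Length; i++)
+            result[i] = y[i] + alpha * x[i];
+    }
+
+    // z = x .* y, element by element (point-wise multiplication)
+    public static void PointwiseMultiply(double[] x, double[] y, double[] z)
+    {
+        for (int i = 0; i < x.Length; i++)
+            z[i] = x[i] * y[i];
+    }
+}
+```
+Optimized providers parallelize or vectorize these loops, as the remarks above note; the scalar form is shown only to pin down the semantics.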
+ + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. 
+ This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. 
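+As a concrete illustration of the Givens rotation described above: given the coordinates (da, db), choose c and s so that the rotation zeroes the second coordinate. The sketch below shows only the basic computation; the full DROTG routine additionally applies sign conventions and returns the reconstruction parameter z, both of which are omitted here.
+
+```csharp
+using System;
+
+// Sketch of a Givens rotation that zeros the y-coordinate of the point (da, db):
+//   [ c  s ] [da]   [r]
+//   [-s  c ] [db] = [0]
+// DROTG's sign conventions and its z parameter are intentionally left out.
+static class GivensSketch
+{
+    public static void Rotate(ref double da, ref double db, out double c, out double s)
+    {
+        double r = Math.Sqrt(da * da + db * db);
+        if (r == 0.0)
+        {
+            c = 1.0; s = 0.0;    // degenerate case: nothing to rotate
+            return;
+        }
+        c = da / r;
+        s = db / r;
+        da = r;                  // on exit: the rotated x-coordinate
+        db = 0.0;                // on exit: the y-coordinate is zeroed (z encoding not shown)
+    }
+}
+```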
+ + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Symmetric Householder reduction to tridiagonal form. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. 
+ This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . 
+ This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. 
+ The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Symmetric Householder reduction to tridiagonal form. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. 
+ + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. + + Uses and uses the value of + to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + Uses the value of to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + Uses + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + if set to true , the class is thread safe. + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Multiplicative congruential generator using a modulus of 2^31-1 and a multiplier of 1132489760. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. 
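+As a rough illustration of the multiplicative congruential generator described above (modulus 2^31 - 1, multiplier 1132489760): each sample is a single modular multiplication of the state. The class below is a simplified sketch with my own naming and seeding, not the library's implementation.
+
+```csharp
+// Sketch of x_{n+1} = (a * x_n) mod (2^31 - 1) with a = 1132489760, as described above.
+class Mcg31m1Sketch
+{
+    const ulong Modulus = 2147483647UL;     // 2^31 - 1
+    const ulong Multiplier = 1132489760UL;
+    ulong _state;
+
+    public Mcg31m1Sketch(uint seed)
+    {
+        _state = seed % Modulus;
+        if (_state == 0) _state = 1;        // a zero state would stay zero forever, so remap it
+    }
+
+    // Advances the recurrence and maps the state to a double in (0, 1).
+    public double NextDouble()
+    {
+        _state = _state * Multiplier % Modulus;   // the 64-bit product cannot overflow here
+        return _state / (double)Modulus;
+    }
+}
+```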
+ + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Multiplicative congruential generator using a modulus of 2^59 and a multiplier of 13^13. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Random number generator using Mersenne Twister 19937 algorithm. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + Uses the value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. 
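+A hedged usage sketch for the Mersenne Twister generator documented above. It assumes the MathNet.Numerics.Random namespace and the NextDouble/NextDoubles/Default members suggested by the summaries in this file; verify against the actual assembly before relying on the exact names.
+
+```csharp
+using System;
+using MathNet.Numerics.Random;   // assumed namespace of the documented generator classes
+
+class MersenneTwisterUsageSketch
+{
+    static void Main()
+    {
+        var rng = new MersenneTwister(42);        // seeded, reproducible sequence
+        double u = rng.NextDouble();              // uniform sample in [0, 1)
+        double[] batch = rng.NextDoubles(1000);   // array fill, callable in parallel per the remarks above
+
+        // Shared instance, per the "Default instance, thread-safe" summary above.
+        double v = MersenneTwister.Default.NextDouble();
+
+        Console.WriteLine($"{u} {batch[0]} {v}");
+    }
+}
+```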
+ + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + A 32-bit combined multiple recursive generator with 2 components of order 3. + + Based off of P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816--822. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Represents a Parallel Additive Lagged Fibonacci pseudo-random number generator. + + + The type bases upon the implementation in the + Boost Random Number Library. + It uses the modulus 232 and by default the "lags" 418 and 1279. Some popular pairs are presented on + Wikipedia - Lagged Fibonacci generator. + + + + + Default value for the ShortLag + + + + + Default value for the LongLag + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The ShortLag value + TheLongLag value + + + + Gets the short lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Gets the long lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Stores an array of random numbers + + + + + Stores an index for the random number array element that will be accessed next. + + + + + Fills the array with new unsigned random numbers. 
+ + + Generated random numbers are 32-bit unsigned integers greater than or equal to 0 + and less than or equal to . + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + This class implements extension methods for the System.Random class. The extension methods generate + pseudo-random distributed numbers for types other than double and int32. + + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random bytes. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers greater than or equal to zero and less than . + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers within the specified range. + + The random number generator. + The array to fill with random values. + Lower bound, inclusive. + Upper bound, exclusive. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative random number less than . + + The random number generator. + + A 64-bit signed integer greater than or equal to 0, and less than ; that is, + the range of return values includes 0 but not . 
+ + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int32 range. + + The random number generator. + + A 32-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int64 range. + + The random number generator. + + A 64-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative decimal floating point random number less than 1.0. + + The random number generator. + + A decimal floating point number greater than or equal to 0.0, and less than 1.0; that is, + the range of return values includes 0.0 but not 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random boolean. + + The random number generator. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Provides a time-dependent seed value, matching the default behavior of System.Random. + WARNING: There is no randomness in this seed and quick repeated calls can cause + the same seed value. Do not use for cryptography! + + + + + Provides a seed based on time and unique GUIDs. + WARNING: There is only low randomness in this seed, but at least quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Provides a seed based on an internal random number generator (crypto if available), time and unique GUIDs. + WARNING: There is only medium randomness in this seed, but quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Base class for random number generators. This class introduces a layer between + and the Math.Net Numerics random number generators to provide thread safety. + When used directly it use the System.Random as random number source. + + + + + Initializes a new instance of the class using + the value of to set whether + the instance is thread safe or not. + + + + + Initializes a new instance of the class. + + if set to true , the class is thread safe. + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The array to fill with random values. + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The size of the array to fill. + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than . + + + + + Returns a random number less then a specified maximum. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + A 32-bit signed integer less than . + is zero or negative. + + + + Returns a random number within a specified range. 
+ + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + A 32-bit signed integer greater than or equal to and less than ; that is, the range of return values includes but not . If equals , is returned. + + is greater than . + + + + Fills an array with random 32-bit signed integers greater than or equal to zero and less than . + + The array to fill with random values. + + + + Returns an array with random 32-bit signed integers greater than or equal to zero and less than . + + The size of the array to fill. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . + + + + + Returns an infinite sequence of random numbers within a specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Fills the elements of a specified array of bytes with random numbers. + + An array of bytes to contain random numbers. + is null. + + + + Returns a random number between 0.0 and 1.0. + + A double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. 
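+The integer range contract described above (inclusive lower bound, exclusive upper bound, and the degenerate equal-bounds case returning the lower bound) can be illustrated by mapping a uniform [0, 1) sample onto the requested range. The helper below is only a sketch of that contract with an invented name, not the RandomSource implementation.
+
+```csharp
+using System;
+
+static class RangeMappingSketch
+{
+    // Maps a uniform [0, 1) sample onto [minInclusive, maxExclusive),
+    // following the range contract described above.
+    public static int NextInRange(Random rng, int minInclusive, int maxExclusive)
+    {
+        if (minInclusive > maxExclusive)
+            throw new ArgumentOutOfRangeException(nameof(minInclusive));
+        if (minInclusive == maxExclusive)
+            return minInclusive;                        // equal bounds: return the lower bound
+        long width = (long)maxExclusive - minInclusive; // avoid Int32 overflow for wide ranges
+        return (int)(minInclusive + (long)(rng.NextDouble() * width));
+    }
+}
+```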
+ + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 1982 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: + An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. 
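+The AS 183 generator cited above combines three small multiplicative congruential generators and returns the fractional part of their scaled sum. The sketch below uses the multipliers and moduli from the 1982 paper; the seeding is a simplification of my own, so it will not reproduce the library's sequences.
+
+```csharp
+using System;
+
+// Sketch of the Wichmann-Hill (1982) AS 183 combined generator described above.
+class WichmannHill1982Sketch
+{
+    int _s1, _s2, _s3;
+
+    public WichmannHill1982Sketch(int seed)
+    {
+        if (seed <= 0) seed = 1;                 // non-positive seeds remapped (simplified)
+        _s1 = 1 + seed % 30268;
+        _s2 = 1 + seed % 30306;
+        _s3 = 1 + seed % 30322;
+    }
+
+    public double NextDouble()
+    {
+        _s1 = 171 * _s1 % 30269;                 // three small MCGs advanced independently
+        _s2 = 172 * _s2 % 30307;
+        _s3 = 170 * _s3 % 30323;
+        double u = _s1 / 30269.0 + _s2 / 30307.0 + _s3 / 30323.0;
+        return u - Math.Floor(u);                // keep the fractional part -> [0, 1)
+    }
+}
+```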
+ + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 2006 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". + Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Implements a multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George. (2003). Xorshift RNGs. + Xn = a * Xn−3 + c mod 2^32 + http://www.jstatsoft.org/v08/i14/paper + + + + + The default value for X1. + + + + + The default value for X2. + + + + + The default value for the multiplier. + + + + + The default value for the carry over. + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Seed or last but three unsigned random number. + + + + + Last but two unsigned random number. + + + + + Last but one unsigned random number. + + + + + The value of the carry over. + + + + + The multiplier. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Note: must be less than . + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. 
+ + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Xoshiro256** pseudo random number generator. + A random number generator based on the class in the .NET library. + + + This is xoshiro256** 1.0, our all-purpose, rock-solid generator. It has + excellent(sub-ns) speed, a state space(256 bits) that is large enough + for any parallel application, and it passes all tests we are aware of. + + For generating just floating-point numbers, xoshiro256+ is even faster. + + The state must be seeded so that it is not everywhere zero.If you have + a 64-bit seed, we suggest to seed a splitmix64 generator and use its + output to fill s. + + For further details see: + David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators". + https://arxiv.org/abs/1805.01407 + + + + + Construct a new random number generator with a random seed. + + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. 
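+The multiply-with-carry recurrence documented above, Xn = a*Xn-3 + c mod 2^32 with the listed defaults for a, c, X1 and X2, can be sketched directly from the documented state fields: the low 32 bits of t = a*x + c become the new value and the high 32 bits the new carry. The class and member names below are illustrative, and the simplified seeding just replaces X1.
+
+```csharp
+// Sketch of the lag-3 multiply-with-carry recurrence described above.
+class MultiplyWithCarrySketch
+{
+    const ulong A = 916905990;   // default multiplier a (see above)
+    ulong _x = 77465321;         // seed / last-but-three (default X1)
+    ulong _y = 362436069;        // last-but-two (default X2)
+    ulong _z;                    // last-but-one
+    ulong _c = 13579;            // carry (default c)
+
+    public MultiplyWithCarrySketch(uint seed)
+    {
+        if (seed != 0) _x = seed;            // simplified seeding: only X1 is replaced
+    }
+
+    public double NextDouble()
+    {
+        ulong t = A * _x + _c;               // fits in 64 bits: a < 2^30, x < 2^32, c < 2^32
+        _x = _y;                             // shift the lag-3 window
+        _y = _z;
+        _c = t >> 32;                        // new carry = high 32 bits
+        _z = t & 0xFFFFFFFFUL;               // new value = t mod 2^32
+        return _z / 4294967296.0;            // map to [0, 1)
+    }
+}
+```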
+ + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Splitmix64 RNG. + + RNG state. This can take any value, including zero. + A new random UInt64. + + Splitmix64 produces equidistributed outputs, thus if a zero is generated then the + next zero will be after a further 2^64 outputs. + + + + + Bisection root-finding algorithm. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy for both the root and the function value at the root. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Algorithm by Brent, Van Wijngaarden, Dekker et al. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. 
Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Helper method useful for preventing rounding errors. + a*sign(b) + + + + Algorithm by Broyden. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Relative step size for calculating the Jacobian matrix at first step. Default 1.0e-4 + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + Relative step size for calculating the Jacobian matrix at first step. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Helper method to calculate an approximation of the Jacobian. + + The function. + The argument (initial guess). + The result (of initial guess). + Relative step size for calculating the Jacobian. + + + + Finds roots to the cubic equation x^3 + a2*x^2 + a1*x + a0 = 0 + Implements the cubic formula in http://mathworld.wolfram.com/CubicFormula.html + + + + + Q and R are transformed variables. 
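+ A minimal sketch of the Q/R-based cubic formula referenced above, including the signed cube root used to work around a negative double raised to (1/3); CubicSketch and its members are illustrative names only, not the documented API:
+
+     using System;
+
+     static class CubicSketch
+     {
+         // Signed cube root: Math.Pow returns NaN for negative bases, so apply the sign separately.
+         static double Cbrt(double v) => Math.Sign(v) * Math.Pow(Math.Abs(v), 1.0 / 3.0);
+
+         // Real roots of x^3 + a2*x^2 + a1*x + a0 = 0, coefficients ascending by exponent.
+         public static double[] RealRoots(double a0, double a1, double a2)
+         {
+             double q = (3.0 * a1 - a2 * a2) / 9.0;                               // Q
+             double r = (9.0 * a2 * a1 - 27.0 * a0 - 2.0 * a2 * a2 * a2) / 54.0;  // R
+             double d = q * q * q + r * r;                                        // discriminant
+             double shift = a2 / 3.0;
+
+             if (d >= 0.0)                             // one real root (repeated roots when d == 0)
+             {
+                 double s = Cbrt(r + Math.Sqrt(d));
+                 double t = Cbrt(r - Math.Sqrt(d));
+                 return new[] { s + t - shift };
+             }
+
+             double theta = Math.Acos(r / Math.Sqrt(-q * q * q));   // d < 0: three real roots
+             double m = 2.0 * Math.Sqrt(-q);
+             return new[]
+             {
+                 m * Math.Cos(theta / 3.0) - shift,
+                 m * Math.Cos((theta + 2.0 * Math.PI) / 3.0) - shift,
+                 m * Math.Cos((theta + 4.0 * Math.PI) / 3.0) - shift
+             };
+         }
+     }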
+ + + + + n^(1/3) - work around a negative double raised to (1/3) + + + + + Find all real-valued roots of the cubic equation a0 + a1*x + a2*x^2 + x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Pure Newton-Raphson root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Robust Newton-Raphson root-finding algorithm that falls back to bisection when overshooting or converging too slow, or to subdivision on lacking bracketing. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Default 20. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. 
+ The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Example: 20. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Pure Secant root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false + + + Detect a range containing at least one root. + The function to detect roots from. + Lower value of the range. + Upper value of the range + The growing factor of research. Usually 1.6. + Maximum number of iterations. Usually 50. + True if the bracketing operation succeeded, false otherwise. + This iterative methods stops when two values with opposite signs are found. + + + + Sorting algorithms for single, tuple and triple lists. + + + + + Sort a list of keys, in place using the quick sort algorithm using the quick sort algorithm. + + The type of elements in the key list. + List to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a range of a list of keys, in place using the quick sort algorithm. 
+ + The type of element in the list. + List to sort. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the primary list. + The type of elements in the secondary list. + List to sort. + List to sort on duplicate primary items, and permute the same way as the key list. + Comparison, defining the primary sort order. + Comparison, defining the secondary sort order. + + + + Recursive implementation for an in place quick sort on a list. + + The type of the list on which the quick sort is performed. + The list which is sorted using quick sort. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on a list while reordering one other list accordingly. + + The type of the list on which the quick sort is performed. + The type of the list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on one list while reordering two other lists accordingly. + + The type of the list on which the quick sort is performed. + The type of the first list which is automatically reordered accordingly. + The type of the second list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The first list which is automatically reordered accordingly. + The second list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on the primary and then by the secondary list while reordering one secondary list accordingly. + + The type of the primary list. + The type of the secondary list. + The list which is sorted using quick sort. + The list which is sorted secondarily (on primary duplicates) and automatically reordered accordingly. + The method with which to compare two elements of the primary list. + The method with which to compare two elements of the secondary list. + The left boundary of the quick sort. + The right boundary of the quick sort. 
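+ A minimal sketch of the keyed quick sort described above: the keys are sorted in place while a second list is permuted identically so that corresponding entries stay aligned; KeyedSortSketch is an illustrative name only, not the documented API:
+
+     using System.Collections.Generic;
+
+     static class KeyedSortSketch
+     {
+         public static void Sort<TKey, TItem>(IList<TKey> keys, IList<TItem> items,
+             IComparer<TKey> comparer, int left, int right)
+         {
+             if (left >= right) return;
+             TKey pivot = keys[(left + right) / 2];
+             int i = left, j = right;
+             while (i <= j)
+             {
+                 while (comparer.Compare(keys[i], pivot) < 0) i++;
+                 while (comparer.Compare(keys[j], pivot) > 0) j--;
+                 if (i <= j)
+                 {
+                     (keys[i], keys[j]) = (keys[j], keys[i]);      // swap the keys
+                     (items[i], items[j]) = (items[j], items[i]);  // keep the items aligned
+                     i++; j--;
+                 }
+             }
+             Sort(keys, items, comparer, left, j);                 // recurse into both partitions
+             Sort(keys, items, comparer, i, right);
+         }
+     }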
+ + + + Performs an in place swap of two elements in a list. + + The type of elements stored in the list. + The list in which the elements are stored. + The index of the first element of the swap. + The index of the second element of the swap. + + + + This partial implementation of the SpecialFunctions class contains all methods related to the Airy functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Bessel functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the error function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Hankel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the harmonic function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the logistic function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the spherical Bessel functions. + + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of Airy function Ai + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of the Airy function Ai. + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Ai. + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. 
+ ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi(z). + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. + ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Bi. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. + + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * Y(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. 
+ + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselY(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Computes the logarithm of the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The logarithm of the Euler Beta function evaluated at z,w. + If or are not positive. + + + + Computes the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The Euler Beta function evaluated at z,w. + If or are not positive. + + + + Returns the lower incomplete (unregularized) beta function + B(a,b,x) = int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. 
+ The lower incomplete (unregularized) beta function. + + + + Returns the regularized lower incomplete beta function + I_x(a,b) = 1/Beta(a,b) * int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. + The regularized lower incomplete beta function. + + + + ************************************** + COEFFICIENTS FOR METHOD ErfImp * + ************************************** + + Polynomial coefficients for a numerator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a denominator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. 
+ + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + + ************************************** + COEFFICIENTS FOR METHOD ErfInvImp * + ************************************** + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Calculates the error function. + The value to evaluate. + the error function evaluated at given value. + + + returns 1 if x == double.PositiveInfinity. + returns -1 if x == double.NegativeInfinity. + + + + + Calculates the complementary error function. + The value to evaluate. + the complementary error function evaluated at given value. + + + returns 0 if x == double.PositiveInfinity. + returns 2 if x == double.NegativeInfinity. + + + + + Calculates the inverse error function evaluated at z. + The inverse error function evaluated at given value. + + + returns double.PositiveInfinity if z >= 1.0. + returns double.NegativeInfinity if z <= -1.0. + + + Calculates the inverse error function evaluated at z. + value to evaluate. + the inverse error function evaluated at Z. + + + + Implementation of the error function. + + Where to evaluate the error function. + Whether to compute 1 - the error function. + the error function. + + + Calculates the complementary inverse error function evaluated at z. + The complementary inverse error function evaluated at given value. + We have tested this implementation against the arbitrary precision mpmath library + and found cases where we can only guarantee 9 significant figures correct. + + returns double.PositiveInfinity if z <= 0.0. 
+ returns double.NegativeInfinity if z >= 2.0. + + + calculates the complementary inverse error function evaluated at z. + value to evaluate. + the complementary inverse error function evaluated at Z. + + + + The implementation of the inverse error function. + + First intermediate parameter. + Second intermediate parameter. + Third intermediate parameter. + the inverse error function. + + + + Computes the generalized Exponential Integral function (En). + + The argument of the Exponential Integral function. + Integer power of the denominator term. Generalization index. + The value of the Exponential Integral function. + + This implementation of the computation of the Exponential Integral function follows the derivation in + "Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55", Abramowitz, M., and Stegun, I.A. 1964, reprinted 1968 by + Dover Publications, New York), Chapters 6, 7, and 26. + AND + "Advanced mathematical methods for scientists and engineers", Bender, Carl M.; Steven A. Orszag (1978). page 253 + + + for x > 1 uses continued fraction approach that is often used to compute incomplete gamma. + for 0 < x <= 1 uses Taylor series expansion + + Our unit tests suggest that the accuracy of the Exponential Integral function is correct up to 13 floating point digits. + + + + + Computes the factorial function x -> x! of an integer number > 0. The function can represent all number up + to 22! exactly, all numbers up to 170! using a double representation. All larger values will overflow. + + A value value! for value > 0 + + If you need to multiply or divide various such factorials, consider using the logarithmic version + instead so you can add instead of multiply and subtract instead of divide, and + then exponentiate the result using . This will also circumvent the problem that + factorials become very large even for small parameters. + + + + + + Computes the factorial of an integer. + + + + + Computes the logarithmic factorial function x -> ln(x!) of an integer number > 0. + + A value value! for value > 0 + + + + Computes the binomial coefficient: n choose k. + + A nonnegative value n. + A nonnegative value h. + The binomial coefficient: n choose k. + + + + Computes the natural logarithm of the binomial coefficient: ln(n choose k). + + A nonnegative value n. + A nonnegative value h. + The logarithmic binomial coefficient: ln(n choose k). + + + + Computes the multinomial coefficient: n choose n1, n2, n3, ... + + A nonnegative value n. + An array of nonnegative values that sum to . + The multinomial coefficient. + if is . + If or any of the are negative. + If the sum of all is not equal to . + + + + The order of the approximation. + + + + + Auxiliary variable when evaluating the function. + + + + + Polynomial coefficients for the approximation. + + + + + Computes the logarithm of the Gamma function. + + The argument of the gamma function. + The logarithm of the gamma function. + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which achieves an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + Our unit tests suggest that the accuracy of the Gamma function is correct up to 14 floating point digits. + + + + + Computes the Gamma function. 
+ + The argument of the gamma function. + The logarithm of the gamma function. + + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which should achieve an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + + Our unit tests suggest that the accuracy of the Gamma function is correct up to 13 floating point digits. + + + + + Returns the upper incomplete regularized gamma function + Q(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete regularized gamma function. + + + + Returns the upper incomplete gamma function + Gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete gamma function. + + + + Returns the lower incomplete gamma function + gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the lower incomplete regularized gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the inverse P^(-1) of the regularized lower incomplete gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0, + such that P^(-1)(a,P(a,x)) == x. + + + + + Computes the Digamma function which is mathematically defined as the derivative of the logarithm of the gamma function. + This implementation is based on + Jose Bernardo + Algorithm AS 103: + Psi ( Digamma ) Function, + Applied Statistics, + Volume 25, Number 3, 1976, pages 315-317. + Using the modifications as in Tom Minka's lightspeed toolbox. + + The argument of the digamma function. + The value of the DiGamma function at . + + + + Computes the inverse Digamma function: this is the inverse of the logarithm of the gamma function. This function will + only return solutions that are positive. + This implementation is based on the bisection method. + + The argument of the inverse digamma function. + The positive solution to the inverse DiGamma function at . + + + + Computes the Rising Factorial (Pochhammer function) x -> (x)n, n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Rising Factorial for x and n + + + + Computes the Falling Factorial (Pochhammer function) x -> x(n), n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Falling Factorial for x and n + + + + A generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. + This is the most common pFq(a1, ..., ap; b1,...,bq; z) representation + see: https://en.wikipedia.org/wiki/Generalized_hypergeometric_function + + The list of coefficients in the numerator + The list of coefficients in the denominator + The variable in the power series + The value of the Generalized HyperGeometric Function. + + + + Returns the Hankel function of the first kind. + HankelH1(n, z) is defined as BesselJ(n, z) + j * BesselY(n, z). 
+ + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the first kind. + + + + Returns the exponentially scaled Hankel function of the first kind. + ScaledHankelH1(n, z) is given by Exp(-z * j) * HankelH1(n, z) where j = Sqrt(-1). + + The order of the Hankel function. + The value to compute the Hankel function of. + The exponentially scaled Hankel function of the first kind. + + + + Returns the Hankel function of the second kind. + HankelH2(n, z) is defined as BesselJ(n, z) - j * BesselY(n, z). + + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the second kind. + + + + Returns the exponentially scaled Hankel function of the second kind. + ScaledHankelH2(n, z) is given by Exp(z * j) * HankelH2(n, z) where j = Sqrt(-1). + + The order of the Hankel function. + The value to compute the Hankel function of. + The exponentially scaled Hankel function of the second kind. + + + + Computes the 'th Harmonic number. + + The Harmonic number which needs to be computed. + The t'th Harmonic number. + + + + Compute the generalized harmonic number of order n of m. (1 + 1/2^m + 1/3^m + ... + 1/n^m) + + The order parameter. + The power parameter. + General Harmonic number. + + + + Returns the Kelvin function of the first kind. + KelvinBe(nu, x) is given by BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(nu, x) and KelvinBei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function of the first kind. + + + + Returns the Kelvin function ber. + KelvinBer(nu, x) is given by the real part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function ber. + KelvinBer(x) is given by the real part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(x) is equivalent to KelvinBer(0, x). + + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function bei. + KelvinBei(nu, x) is given by the imaginary part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the Kelvin function bei. + KelvinBei(x) is given by the imaginary part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBei(x) is equivalent to KelvinBei(0, x). + + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the derivative of the Kelvin function ber. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function ber + + + + Returns the derivative of the Kelvin function ber. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ber. + + + + Returns the derivative of the Kelvin function bei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function bei. + + + + Returns the derivative of the Kelvin function bei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function bei. 
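+ For illustration only: the small-argument series ber(x) = sum_k (-1)^k (x/2)^(4k) / ((2k)!)^2 and bei(x) = sum_k (-1)^k (x/2)^(4k+2) / ((2k+1)!)^2 reproduce the Kelvin functions of the first kind documented above for moderate x; the method name is invented for the example:
+
+     static (double ber, double bei) KelvinBerBeiSeries(double x)
+     {
+         double u2 = (x / 2.0) * (x / 2.0);
+         double ber = 0.0, bei = 0.0;
+         double power = 1.0;                               // (x/2)^(2m)
+         double fact = 1.0;                                // m!
+         for (int m = 0; m < 30; m++)
+         {
+             double term = power / (fact * fact);          // (x/2)^(2m) / (m!)^2
+             if (m % 2 == 0) ber += (m % 4 == 0 ? term : -term);   // even m feeds ber
+             else            bei += (m % 4 == 1 ? term : -term);   // odd m feeds bei
+             power *= u2;
+             fact *= m + 1;
+         }
+         return (ber, bei);
+     }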
+ + + + Returns the Kelvin function of the second kind + KelvinKe(nu, x) is given by Exp(-nu * pi * j / 2) * BesselK(nu, x * sqrt(j)) where j = sqrt(-1). + KelvinKer(nu, x) and KelvinKei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + The order of the Kelvin function. + The value to calculate the kelvin function of, + + + + + Returns the Kelvin function ker. + KelvinKer(nu, x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function ker. + KelvinKer(x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKer(x) is equivalent to KelvinKer(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function kei. + KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the Kelvin function kei. + KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKei(x) is equivalent to KelvinKei(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the derivative of the Kelvin function ker. + + The order of the Kelvin function. + The non-negative real value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function ker. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function kei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Returns the derivative of the Kelvin function kei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic + + The parameter for which to compute the logistic function. + The logistic function of . + + + + Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit + + The parameter for which to compute the logit function. This number should be + between 0 and 1. + The logarithm of divided by 1.0 - . + + + + ************************************** + COEFFICIENTS FOR METHODS bessi0 * + ************************************** + + Chebyshev coefficients for exp(-x) I0(x) + in the interval [0, 8]. + + lim(x->0){ exp(-x) I0(x) } = 1. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I0(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessi1 * + ************************************** + + Chebyshev coefficients for exp(-x) I1(x) / x + in the interval [0, 8]. + + lim(x->0){ exp(-x) I1(x) / x } = 1/2. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I1(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). 
+ + + + + ************************************** + COEFFICIENTS FOR METHODS bessk0, bessk0e * + ************************************** + + Chebyshev coefficients for K0(x) + log(x/2) I0(x) + in the interval [0, 2]. The odd order coefficients are all + zero; only the even order coefficients are listed. + + lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. + + + + Chebyshev coefficients for exp(x) sqrt(x) K0(x) + in the inverted interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessk1, bessk1e * + ************************************** + + Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) + in the interval [0, 2]. + + lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. + + + + Chebyshev coefficients for exp(x) sqrt(x) K1(x) + in the interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). + + + + Returns the modified Bessel function of first kind, order 0 of the argument. +

+ The function is defined as i0(x) = j0( ix ).
+ The range is partitioned into the two intervals [0, 8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval.
+ The value to compute the Bessel function of.
+
+ Returns the modified Bessel function of first kind, order 1 of the argument.
+ The function is defined as i1(x) = -i j1( ix ).
+ The range is partitioned into the two intervals [0, 8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval.
+ The value to compute the Bessel function of.
+
+ Returns the modified Bessel function of the second kind of order 0 of the argument.
+ The range is partitioned into the two intervals [0, 2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval.
+ The value to compute the Bessel function of.
+
+ Returns the exponentially scaled modified Bessel function of the second kind of order 0 of the argument.
+ The value to compute the Bessel function of.
+
+ Returns the modified Bessel function of the second kind of order 1 of the argument.
+ The range is partitioned into the two intervals [0, 2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval.
+ The value to compute the Bessel function of.
+
+ Returns the exponentially scaled modified Bessel function of the second kind of order 1 of the argument.
+ k1e(x) = exp(x) * k1(x).
+ The value to compute the Bessel function of.
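+ For illustration only (the documented routines use the Chebyshev expansions above): the ascending series I0(x) = sum_k (x^2/4)^k / (k!)^2 gives the same values for small |x|:
+
+     static double BesselI0Series(double x)
+     {
+         double q = x * x / 4.0;
+         double term = 1.0, sum = 0.0;               // k = 0 term is 1
+         for (int k = 0; k < 40; k++)
+         {
+             sum += term;
+             term *= q / ((k + 1.0) * (k + 1.0));    // ratio of successive terms
+         }
+         return sum;
+     }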
+ + + Returns the modified Struve function of order 0. + + The value to compute the function of. + + + + Returns the modified Struve function of order 1. + + The value to compute the function of. + + + + Returns the difference between the Bessel I0 and Struve L0 functions. + + The value to compute the function of. + + + + Returns the difference between the Bessel I1 and Struve L1 functions. + + The value to compute the function of. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Numerically stable exponential minus one, i.e. x -> exp(x)-1 + + A number specifying a power. + Returns exp(power)-1. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Evaluation functions, useful for function approximation. + + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. 
+ The coefficients of the polynomial, coefficient for power k at index k. + + + + Numerically stable series summation + + provides the summands sequentially + Sum + + + Evaluates the series of Chebyshev polynomials Ti at argument x/2. + The series is given by +
+     y = sum( coef[i] * T_i(x/2), i = 0..N-1 )
+ Coefficients are stored in reverse order, i.e. the zero order term is last in the array. Note N is the number of coefficients, not the order.
+
+ If coefficients are for the interval a to b, x must have been transformed to x -> 2(2x - b - a)/(b-a) before entering the routine. This maps x from (a, b) to (-1, 1), over which the Chebyshev polynomials are defined.
+
+ If the coefficients are for the inverted interval, in which (a, b) is mapped to (1/b, 1/a), the transformation required is x -> 2(2ab/x - b - a)/(b-a). If b is infinity, this becomes x -> 4a/x - 1.
+
+ SPEED: Taking advantage of the recurrence properties of the Chebyshev polynomials, the routine requires one more addition per loop than evaluating a nested polynomial of the same degree.
+
+ The coefficients of the polynomial. Argument to the polynomial.
+ Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs
+ Marked as Deprecated in http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html
+ Summation of Chebyshev polynomials, using the Clenshaw method with Reinsch modification. + + The no. of terms in the sequence. + The coefficients of the Chebyshev series, length n+1. + The value at which the series is to be evaluated. + + ORIGINAL AUTHOR: + Dr. Allan J. MacLeod; Dept. of Mathematics and Statistics, University of Paisley; High St., PAISLEY, SCOTLAND + REFERENCES: + "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series" + J. Oliver, J.I.M.A., vol. 20, 1977, pp379-391 + +
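+ A minimal sketch of the recurrence-based Chebyshev evaluation described above (coefficients stored in reverse order, zero-order coefficient counted half, argument already transformed to the interval of the T_i); the method name is illustrative only:
+
+     static double ChebyshevSeries(double x, double[] coef)
+     {
+         double b0 = coef[0], b1 = 0.0, b2 = 0.0;
+         for (int i = 1; i < coef.Length; i++)
+         {
+             b2 = b1;
+             b1 = b0;
+             b0 = x * b1 - b2 + coef[i];   // one extra addition per loop versus Horner's rule
+         }
+         return 0.5 * (b0 - b2);           // the zero-order coefficient contributes half
+     }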
+ + + Valley-shaped Rosenbrock function for 2 dimensions: (x,y) -> (1-x)^2 + 100*(y-x^2)^2. + This function has a global minimum at (1,1) with f(1,1) = 0. + Common range: [-5,10] or [-2.048,2.048]. + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Valley-shaped Rosenbrock function for 2 or more dimensions. + This function have a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1,1,...,1). + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Himmelblau, a multi-modal function: (x,y) -> (x^2+y-11)^2 + (x+y^2-7)^2 + This function has 4 global minima with f(x,y) = 0. + Common range: [-6,6]. + Named after David Mautner Himmelblau + + + https://en.wikipedia.org/wiki/Himmelblau%27s_function + + + + + Rastrigin, a highly multi-modal function with many local minima. + Global minimum of all zeros with f(0) = 0. + Common range: [-5.12,5.12]. + + + https://en.wikipedia.org/wiki/Rastrigin_function + http://www.sfu.ca/~ssurjano/rastr.html + + + + + Drop-Wave, a multi-modal and highly complex function with many local minima. + Global minimum of all zeros with f(0) = -1. + Common range: [-5.12,5.12]. + + + http://www.sfu.ca/~ssurjano/drop.html + + + + + Ackley, a function with many local minima. It is nearly flat in outer regions but has a large hole at the center. + Global minimum of all zeros with f(0) = 0. + Common range: [-32.768, 32.768]. + + + http://www.sfu.ca/~ssurjano/ackley.html + + + + + Bowl-shaped first Bohachevsky function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-100, 100] + + + http://www.sfu.ca/~ssurjano/boha.html + + + + + Plate-shaped Matyas function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-10, 10]. + + + http://www.sfu.ca/~ssurjano/matya.html + + + + + Valley-shaped six-hump camel back function. + Two global minima and four local minima. Global minima with f(x) ) -1.0316 at (0.0898,-0.7126) and (-0.0898,0.7126). + Common range: x in [-3,3], y in [-2,2]. + + + http://www.sfu.ca/~ssurjano/camel6.html + + + + + Statistics operating on arrays assumed to be unsorted. + WARNING: Methods with the Inplace-suffix may modify the data array by reordering its entries. + + + + + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. 
+ Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. 
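The descriptions above repeatedly distinguish the unbiased sample estimators (N-1 normalizer, Bessel's correction) from the full-population evaluators (N normalizer), and also mention the root mean square. The following plain Python sketch only illustrates those formulas and the NaN conventions for degenerate inputs; it is not the library's implementation, and the dataset is an arbitrary example.

```python
import math

def sample_variance(data):
    """Unbiased estimator: divides by N-1 (Bessel's correction)."""
    n = len(data)
    if n < 2:
        return float("nan")
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - 1)

def population_variance(data):
    """Evaluator for a full population: divides by N (biased on a subset)."""
    n = len(data)
    if n == 0:
        return float("nan")
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / n

def root_mean_square(data):
    """Quadratic mean: square root of the mean of the squares."""
    if not data:
        return float("nan")
    return math.sqrt(sum(x * x for x in data) / len(data))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_variance(data))      # ~4.571 (N-1 normalizer)
print(population_variance(data))  # 4.0   (N normalizer)
print(root_mean_square(data))     # ~5.385
```

Taking the square root of either variance gives the corresponding sample or population standard deviation described above.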
+ + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. 
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + A class with correlation measures between two datasets. + + + + + Auto-correlation function (ACF) based on FFT for all possible lags k. + + Data array to calculate auto correlation for. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function (ACF) based on FFT for lags between kMin and kMax. + + The data array to calculate auto correlation for. + Max lag to calculate ACF for must be positive and smaller than x.Length. + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function based on FFT for lags k. + + The data array to calculate auto correlation for. + Array with lags to calculate ACF for. + An array with the ACF as a function of the lags k. + + + + The internal method for calculating the auto-correlation. + + The data array to calculate auto-correlation for + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length + Max lag (EXCLUSIVE) to calculate ACF for must be positive and smaller than x.Length + An array with the ACF as a function of the lags k. + + + + Computes the Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + The Pearson product-moment correlation coefficient. + + + + Computes the Weighted Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + Corresponding weights of data. + The Weighted Pearson product-moment correlation coefficient. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Array of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Enumerable of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Spearman Ranked Correlation coefficient. + + Sample data series A. + Sample data series B. + The Spearman ranked correlation coefficient. + + + + Computes the Spearman Ranked Correlation matrix. + + Array of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the Spearman Ranked Correlation matrix. + + Enumerable of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the basic statistics of data set. The class meets the + NIST standard of accuracy for mean, variance, and standard deviation + (the only statistics they provide exact values for) and exceeds them + in increased accuracy mode. + Recommendation: consider to use RunningStatistics instead. 
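Since the text above recommends a running accumulator over the basic descriptive-statistics class, a minimal sketch of the usual one-pass approach (Welford's update) may help show why such an accumulator can be both single-pass and numerically well behaved compared with a naive sum-of-squares formula. This is only an illustration under that assumption, not the library's implementation; the class name is made up for the example.

```python
class RunningMeanVariance:
    """One-pass (Welford) accumulator for the mean and unbiased variance."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the current mean

    def push(self, x):
        """Update the statistics with one more observed sample."""
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Unbiased sample variance (N-1 normalizer); NaN with fewer than two samples."""
        if self.count < 2:
            return float("nan")
        return self._m2 / (self.count - 1)

acc = RunningMeanVariance()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    acc.push(x)
print(acc.mean, acc.variance)  # 5.0 and ~4.571
```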
+ + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Gets the size of the sample. + + The size of the sample. + + + + Gets the sample mean. + + The sample mean. + + + + Gets the unbiased population variance estimator (on a dataset of size N will use an N-1 normalizer). + + The sample variance. + + + + Gets the unbiased population standard deviation (on a dataset of size N will use an N-1 normalizer). + + The sample standard deviation. + + + + Gets the sample skewness. + + The sample skewness. + Returns zero if is less than three. + + + + Gets the sample kurtosis. + + The sample kurtosis. + Returns zero if is less than four. + + + + Gets the maximum sample value. + + The maximum sample value. + + + + Gets the minimum sample value. + + The minimum sample value. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Internal use. Method use for setting the statistics. + + For setting Mean. + For setting Variance. + For setting Skewness. + For setting Kurtosis. + For setting Minimum. + For setting Maximum. + For setting Count. + + + + A consists of a series of s, + each representing a region limited by a lower bound (exclusive) and an upper bound (inclusive). + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + This IComparer performs comparisons between a point and a bucket. + + + + + Compares a point and a bucket. The point will be encapsulated in a bucket with width 0. + + The first bucket to compare. + The second bucket to compare. + -1 when the point is less than this bucket, 0 when it is in this bucket and 1 otherwise. + + + + Lower Bound of the Bucket. + + + + + Upper Bound of the Bucket. + + + + + The number of datapoints in the bucket. + + + Value may be NaN if this was constructed as a argument. + + + + + Initializes a new instance of the Bucket class. + + + + + Constructs a Bucket that can be used as an argument for a + like when performing a Binary search. + + Value to look for + + + + Creates a copy of the Bucket with the lowerbound, upperbound and counts exactly equal. 
+ + A cloned Bucket object. + + + + Width of the Bucket. + + + + + True if this is a single point argument for + when performing a Binary search. + + + + + Default comparer. + + + + + This method check whether a point is contained within this bucket. + + The point to check. + + 0 if the point falls within the bucket boundaries; + -1 if the point is smaller than the bucket, + +1 if the point is larger than the bucket. + + + + Comparison of two disjoint buckets. The buckets cannot be overlapping. + + + 0 if UpperBound and LowerBound are bit-for-bit equal + 1 if This bucket is lower that the compared bucket + -1 otherwise + + + + + Checks whether two Buckets are equal. + + + UpperBound and LowerBound are compared bit-for-bit, but This method tolerates a + difference in Count given by . + + + + + Provides a hash code for this bucket. + + + + + Formats a human-readable string for this bucket. + + + + + A class which computes histograms of data. + + + + + Contains all the Buckets of the Histogram. + + + + + Indicates whether the elements of buckets are currently sorted. + + + + + Initializes a new instance of the Histogram class. + + + + + Constructs a Histogram with a specific number of equally sized buckets. The upper and lower bound of the histogram + will be set to the smallest and largest datapoint. + + The data sequence to build a histogram on. + The number of buckets to use. + + + + Constructs a Histogram with a specific number of equally sized buckets. + + The data sequence to build a histogram on. + The number of buckets to use. + The histogram lower bound. + The histogram upper bound. + + + + Add one data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The datapoint which we want to add. + + + + Add a sequence of data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The sequence of datapoints which we want to add. + + + + Adds a Bucket to the Histogram. + + + + + Sort the buckets if needed. + + + + + Returns the Bucket that contains the value v. + + The point to search the bucket for. + A copy of the bucket containing point . + + + + Returns the index in the Histogram of the Bucket + that contains the value v. + + The point to search the bucket index for. + The index of the bucket containing the point. + + + + Returns the lower bound of the histogram. + + + + + Returns the upper bound of the histogram. + + + + + Gets the n'th bucket. + + The index of the bucket to be returned. + A copy of the n'th bucket. + + + + Gets the number of buckets. + + + + + Gets the total number of datapoints in the histogram. + + + + + Prints the buckets contained in the . + + + + + Kernel density estimation (KDE). + + + + + Estimate the probability density function of a random variable. + + + The routine assumes that the provided kernel is well defined, i.e. a real non-negative function that integrates to 1. + + + + + Estimate the probability density function of a random variable with a Gaussian kernel. + + + + + Estimate the probability density function of a random variable with an Epanechnikov kernel. + The Epanechnikov kernel is optimal in a mean square error sense. + + + + + Estimate the probability density function of a random variable with a uniform kernel. + + + + + Estimate the probability density function of a random variable with a triangular kernel. 
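To illustrate what the kernel density estimators described above compute, here is a short Python sketch of the textbook estimate f_hat(x) = 1/(n*h) * sum K((x - x_i)/h) with a Gaussian kernel; the Epanechnikov, uniform and triangular kernels plug into the same formula. This is only a sketch of the standard formula, not the library's API, and the sample points and bandwidth are arbitrary illustrative values.

```python
import math

def gaussian_kernel(u):
    """Standard normal PDF, the default kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def epanechnikov_kernel(u):
    """Epanechnikov kernel: 3/4*(1-u^2) on [-1,1], 0 outside."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kde(x, samples, bandwidth, kernel=gaussian_kernel):
    """Kernel density estimate at x: average of scaled kernels centered on the samples."""
    n = len(samples)
    return sum(kernel((x - xi) / bandwidth) for xi in samples) / (n * bandwidth)

samples = [-1.2, -0.4, 0.0, 0.3, 0.9, 1.5]
for x in (-2.0, 0.0, 2.0):
    # the estimated density is highest near the cluster of samples around 0
    print(x, kde(x, samples, bandwidth=0.5))
```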
+ + + + + A Gaussian kernel (PDF of Normal distribution with mean 0 and variance 1). + This kernel is the default. + + + + + Epanechnikov Kernel: + x => Math.Abs(x) <= 1.0 ? 3.0/4.0(1.0-x^2) : 0.0 + + + + + Uniform Kernel: + x => Math.Abs(x) <= 1.0 ? 1.0/2.0 : 0.0 + + + + + Triangular Kernel: + x => Math.Abs(x) <= 1.0 ? (1.0-Math.Abs(x)) : 0.0 + + + + + A hybrid Monte Carlo sampler for multivariate distributions. + + + + + Number of parameters in the density function. + + + + + Distribution to sample momentum from. + + + + + Standard deviations used in the sampling of different components of the + momentum. + + + + + Gets or sets the standard deviations used in the sampling of different components of the + momentum. + + When the length of pSdv is not the same as Length. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + 1 using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the a random number generator provided by the user. + A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviations + given by pSdv. This constructor will set the burn interval, the method used for + numerical differentiation and the random number generator. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. 
+ The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + The method used for numerical differentiation. + When the number of burnInterval iteration is negative. + When the length of pSdv is not the same as x0. + + + + Initialize parameters. + + The current location of the sampler. + + + + Checking that the location and the momentum are of the same dimension and that each component is positive. + + The standard deviations used for sampling the momentum. + When the length of pSdv is not the same as Length or if any + component is negative. + When pSdv is null. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the gradient. Uses a simple three point estimation. + + Function which the gradient is to be evaluated. + The location where the gradient is to be evaluated. + The gradient of the function at the point x. + + + + The Hybrid (also called Hamiltonian) Monte Carlo produces samples from distribution P using a set + of Hamiltonian equations to guide the sampling process. It uses the negative of the log density as + a potential energy, and a randomly generated momentum to set up a Hamiltonian system, which is then used + to sample the distribution. This can result in a faster convergence than the random walk Metropolis sampler + (). + + The type of samples this sampler produces. + + + + The delegate type that defines a derivative evaluated at a certain point. + + Function to be differentiated. + Value where the derivative is computed. + + + + Evaluates the energy function of the target distribution. + + + + + The current location of the sampler. + + + + + The number of burn iterations between two samples. + + + + + The size of each step in the Hamiltonian equation. + + + + + The number of iterations in the Hamiltonian equation. + + + + + The algorithm used for differentiation. + + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the number of iterations in the Hamiltonian equation. + + When frog leap steps is negative or zero. + + + + Gets or sets the size of each step in the Hamiltonian equation. + + When step size is negative or zero. + + + + Constructs a new Hybrid Monte Carlo sampler. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + Random number generator used for sampling the momentum. + The method used for differentiation. + When the number of burnInterval iteration is negative. + When either x0, pdfLnP or diff is null. + + + + Returns a sample from the distribution P. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Method used to update the sample location. Used in the end of the loop. + + The old energy. + The old gradient/derivative of the energy. + The new sample. + The new gradient/derivative of the energy. + The new energy. + The difference between the old Hamiltonian and new Hamiltonian. 
Use to determine + if an update should take place. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Method for doing dot product. + + First vector/scalar in the product. + Second vector/scalar in the product. + + + + Method for adding, multiply the second vector/scalar by factor and then + add it to the first vector/scalar. + + First vector/scalar. + Scalar factor multiplying by the second vector/scalar. + Second vector/scalar. + + + + Multiplying the second vector/scalar by factor and then subtract it from + the first vector/scalar. + + First vector/scalar. + Scalar factor to be multiplied to the second vector/scalar. + Second vector/scalar. + + + + Method for sampling a random momentum. + + Momentum to be randomized. + + + + The Hamiltonian equations that is used to produce the new sample. + + + + + Method to compute the Hamiltonian used in the method. + + The momentum. + The energy. + Hamiltonian=E+p.p/2 + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than or equal to zero. + Throws when value is negative. + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than to zero. + Throws when value is negative or zero. + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than zero. + Throws when value is negative or zero. + + + + Provides utilities to analysis the convergence of a set of samples from + a . + + + + + Computes the auto correlations of a series evaluated by a function f. + + The series for computing the auto correlation. + The lag in the series + The function used to evaluate the series. + The auto correlation. + Throws if lag is zero or if lag is + greater than or equal to the length of Series. + + + + Computes the effective size of the sample when evaluated by a function f. + + The samples. + The function use for evaluating the series. + The effective size when auto correlation is taken into account. + + + + A method which samples datapoints from a proposal distribution. The implementation of this sampler + is stateless: no variables are saved between two calls to Sample. This proposal is different from + in that it doesn't take any parameters; it samples random + variables from the whole domain. + + The type of the datapoints. + A sample from the proposal distribution. + + + + A method which samples datapoints from a proposal distribution given an initial sample. The implementation + of this sampler is stateless: no variables are saved between two calls to Sample. This proposal is different from + in that it samples locally around an initial point. In other words, it + makes a small local move rather than producing a global sample from the proposal. + + The type of the datapoints. + The initial sample. + A sample from the proposal distribution. + + + + A function which evaluates a density. + + The type of data the distribution is over. + The sample we want to evaluate the density for. + + + + A function which evaluates a log density. + + The type of data the distribution is over. + The sample we want to evaluate the log density for. + + + + A function which evaluates the log of a transition kernel probability. 
+ + The type for the space over which this transition kernel is defined. + The new state in the transition. + The previous state in the transition. + The log probability of the transition. + + + + The interface which every sampler must implement. + + The type of samples this sampler produces. + + + + The random number generator for this class. + + + + + Keeps track of the number of accepted samples. + + + + + Keeps track of the number of calls to the proposal sampler. + + + + + Initializes a new instance of the class. + + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Gets or sets the random number generator. + + When the random number generator is null. + + + + Returns one sample. + + + + + Returns a number of samples. + + The number of samples we want. + An array of samples. + + + + Gets the acceptance rate of the sampler. + + + + + Metropolis-Hastings sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis-Hastings sampling doesn't require that the + proposal distribution Q is symmetric in comparison to . It does need to + be able to evaluate the proposal sampler's log density though. All densities are required to be in log space. + + The Metropolis-Hastings sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the target distribution. + + + + + Evaluates the log transition probability for the proposal distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis-Hastings sampler using the default random number generator. This + constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + The log transition probability for the proposal distribution. + A method that samples from the proposal distribution. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Metropolis sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis sampling requires that the proposal + distribution Q is symmetric. All densities are required to be in log space. + + The Metropolis sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the sampling distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis sampler using the default random number generator. + + The initial sample. + The log density of the distribution we want to sample from. 
+ A method that samples from the symmetric proposal distribution. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Rejection sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P and Q. The density of P and Q don't need to + to be normalized, but we do need that for each x, P(x) < Q(x). + + The type of samples this sampler produces. + + + + Evaluates the density function of the sampling distribution. + + + + + Evaluates the density function of the proposal distribution. + + + + + A function which samples from a proposal distribution. + + + + + Constructs a new rejection sampler using the default random number generator. + + The density of the distribution we want to sample from. + The density of the proposal distribution. + A method that samples from the proposal distribution. + + + + Returns a sample from the distribution P. + + When the algorithms detects that the proposal + distribution doesn't upper bound the target distribution. + + + + A hybrid Monte Carlo sampler for univariate distributions. + + + + + Distribution to sample momentum from. + + + + + Standard deviations used in the sampling of the + momentum. + + + + + Gets or sets the standard deviation used in the sampling of the + momentum. + + When standard deviation is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using a random + number generator provided by the user. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + Random number generator used to sample the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + given by pSdv using a random + number generator provided by the user. This constructor will set both the burn interval and the method used for + numerical differentiation. 
+ + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + The method used for numerical differentiation. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the derivative. Uses a simple three point estimation. + + Function for which the derivative is to be evaluated. + The location where the derivative is to be evaluated. + The derivative of the function at the point x. + + + + Slice sampling produces samples from distribution P by uniformly sampling from under the pdf of P using + a technique described in "Slice Sampling", R. Neal, 2003. All densities are required to be in log space. + + The slice sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + + + + Evaluates the log density function of the target distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + The scale of the slice sampler. + + + + + Constructs a new Slice sampler using the default random + number generator. The burn interval will be set to 0. + + The initial sample. + The density of the distribution we want to sample from. + The scale factor of the slice sampler. + When the scale of the slice sampler is not positive. + + + + Constructs a new slice sampler using the default random number generator. It + will set the number of burnInterval iterations and run a burnInterval phase. + + The initial sample. + The density of the distribution we want to sample from. + The number of iterations in between returning samples. + The scale factor of the slice sampler. + When the number of burnInterval iteration is negative. + When the scale of the slice sampler is not positive. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the scale of the slice sampler. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Running statistics over a window of data, allows updating by adding values. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. 
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + Replace ties with their mean (non-integer ranks). Default. + + + Replace ties with their minimum (typical sports ranking). + + + Replace ties with their maximum. + + + Permutation with increasing values at each index of ties. + + + + Running statistics accumulator, allows updating by adding values + or by combining two accumulators. + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Evaluates the population skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + + + + Evaluates the population kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). 
+ Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + + Create a new running statistics over the combined samples of two existing running statistics. + + + + + Statistics operating on an array already sorted ascendingly. + + + + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. 
+ Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. 
+ + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Extension methods to return basic statistics on set of data. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. 
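Rank evaluation with configurable tie handling, as referenced above, is easiest to see with the most common convention: tied entries share the average of the ranks they occupy. This is only one possible definition (the excerpt does not list which ones the library offers beyond "compatible with an existing system"); the code is an illustrative sketch.

```python
def ranks_average_ties(sorted_data):
    # one-based ranks on an ascending array; ties share their average rank
    n = len(sorted_data)
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and sorted_data[j + 1] == sorted_data[i]:
            j += 1                       # extend the run of tied values
        avg = (i + 1 + j + 1) / 2.0      # average of the one-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    return ranks

print(ranks_average_ties([1.0, 1.0, 2.0, 3.0, 3.0, 3.0]))  # [1.5, 1.5, 3.0, 5.0, 5.0, 5.0]
```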
+ Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the geometric mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the geometric mean of. + The geometric mean of the sample. + + + + Evaluates the geometric mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the geometric mean of. + The geometric mean of the sample. + + + + Evaluates the harmonic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the harmonic mean of. + The harmonic mean of the sample. + + + + Evaluates the harmonic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the harmonic mean of. + The harmonic mean of the sample. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population standard deviation from the provided samples. 
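The distinction drawn above between the sample estimator (N-1 normalizer, Bessel's correction) and the population evaluation (N normalizer) is the key detail. A minimal sketch of the two formulas, with invented function names:

```python
def sample_variance(xs):
    # unbiased estimate: divide by N-1 (Bessel's correction); NaN for fewer than two entries
    n = len(xs)
    if n < 2:
        return float("nan")
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def population_variance(xs):
    # divide by N: exact for a full population, biased if applied to a sample
    n = len(xs)
    if n == 0:
        return float("nan")
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

samples = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_variance(samples), population_variance(samples))   # ~4.571 and 4.0
```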
+ On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + The full population data. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + + The full population data. + + + + Evaluates the kurtosis from the full population. 
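For the skewness and kurtosis entries above, the type 1 (population) formulas are the simplest reference point: they are built from the second, third and fourth central moments. The sketch below uses those formulas and reports excess kurtosis (kurtosis minus 3); whether the library reports raw or excess kurtosis, and its exact type 2 correction factors, are not stated in this excerpt.

```python
def population_skewness(xs):
    # type 1: m3 / m2**1.5, no bias correction
    n = len(xs)
    if n < 2:
        return float("nan")
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def population_excess_kurtosis(xs):
    # type 1: m4 / m2**2 - 3, no bias correction
    n = len(xs)
    if n < 3:
        return float("nan")
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

print(population_skewness([1.0, 2.0, 3.0, 4.0, 100.0]))   # strongly positive: one large outlier
```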
+ Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. + Uses a normalizer (Bessel's correction; type 2). + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness and kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + + The full population data. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. 
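Covariance follows the same sample-versus-population split as variance. A sketch of the Bessel-corrected sample estimator; the name `sample_covariance` is illustrative, not the library's API.

```python
def sample_covariance(xs, ys):
    # unbiased estimate from paired samples: divide by N-1 (Bessel's correction)
    n = len(xs)
    if n < 2 or n != len(ys):
        return float("nan")
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (n - 1)

print(sample_covariance([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))   # ~3.333 for a linear pair
```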
+ + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + The full population data. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The data to calculate the mean of. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). 
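The root mean square ("quadratic mean") mentioned above reduces to one line: square, average, take the square root. A small sketch with an invented name:

```python
import math

def root_mean_square(xs):
    # quadratic mean: sqrt of the mean of the squares; NaN on empty input
    if not xs:
        return float("nan")
    return math.sqrt(sum(x * x for x in xs) / len(xs))

print(root_mean_square([1.0, -1.0, 1.0, -1.0]))   # 1.0 for a unit square wave
```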
+ Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. 
+ Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. 
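For the quartile, inter-quartile-range and five-number-summary estimators above, NumPy exposes the same R-8 rule under the name "median_unbiased" (assuming NumPy 1.22 or newer, where the `method` keyword was introduced). This is a cross-check against another implementation, not the library's code:

```python
import numpy as np

data = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
q1 = np.percentile(data, 25, method="median_unbiased")   # first quartile, R-8 rule
q3 = np.percentile(data, 75, method="median_unbiased")   # third quartile
iqr = q3 - q1                                             # inter-quartile range
five_number = [data.min(), q1, np.median(data), q3, data.max()]
print(q1, q3, iqr)
print(five_number)
```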
+ The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. 
+ + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + Null-entries are ignored. + + The data sample sequence. + + + + Evaluates the sample mean over a moving window, for each samples. + Returns NaN if no data is empty or if any entry is NaN. + + The sample stream to calculate the mean of. + The number of last samples to consider. + + + + Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. + Can be used in a streaming way, e.g. on large datasets not fitting into memory. + + + + + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. 
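One straightforward reading of the entropy entry above is the Shannon entropy, in bits, of the empirical distribution of the values. The sketch below implements that reading; how the library groups floating-point values is not described in this excerpt, so treat it as an assumption.

```python
import math
from collections import Counter

def entropy_bits(values):
    # Shannon entropy of the empirical distribution of the values, in bits
    values = list(values)
    if not values or any(math.isnan(v) for v in values):
        return float("nan")
    n = len(values)
    counts = Counter(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy_bits([1.0, 1.0, 2.0, 2.0]))   # 1.0 bit: two equally likely values
```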
+ + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. 
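"Single pass without memoization" for mean and variance is classically done with Welford's recurrence, which never stores the samples. The excerpt does not say which update the library uses internally, so the following is a sketch of the technique rather than its implementation:

```python
def streaming_mean_variance(stream):
    # Welford's single-pass update: no need to keep the samples in memory
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    if n == 0:
        return float("nan"), float("nan")
    variance = m2 / (n - 1) if n > 1 else float("nan")   # Bessel-corrected
    return mean, variance

print(streaming_mean_variance(iter([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])))   # (5.0, ~4.571)
```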
+ + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. 
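The streaming covariance entries admit the same treatment: a single-pass co-moment update, the covariance analogue of Welford's variance recurrence. Again, a sketch of the technique and not the library's code:

```python
def streaming_covariance(pairs):
    # single-pass co-moment update over paired samples
    n, mean_x, mean_y, comoment = 0, 0.0, 0.0, 0.0
    for x, y in pairs:
        n += 1
        dx = x - mean_x
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        comoment += dx * (y - mean_y)        # uses the already-updated mean_y
    return comoment / (n - 1) if n > 1 else float("nan")   # Bessel-corrected

print(streaming_covariance(zip([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])))   # ~3.333
```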
+ + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Calculates the entropy of a stream of double values. + Returns NaN if any of the values in the stream are NaN. + + The input stream to evaluate. + + + + + Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. + + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The body to be invoked for each iteration range. + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The partition size for splitting work into smaller pieces. + The body to be invoked for each iteration range. + + + + Executes each of the provided actions inside a discrete, asynchronous task. + + An array of actions to execute. + The actions array contains a null element. + At least one invocation of the actions threw an exception. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Double-precision trigonometry toolkit. + + + + + Constant to convert a degree to grad. + + + + + Converts a degree (360-periodic) angle to a grad (400-periodic) angle. + + The degree to convert. + The converted grad angle. + + + + Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. + + The degree to convert. + The converted radian angle. + + + + Converts a grad (400-periodic) angle to a degree (360-periodic) angle. + + The grad to convert. + The converted degree. + + + + Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. + + The grad to convert. + The converted radian. + + + + Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. + + The radian to convert. + The converted degree. 
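The degree/grad/radian conversions above are fixed scale factors (360 degrees = 400 grad = 2*pi radians). A compact sketch with descriptive names:

```python
import math

def degree_to_grad(deg):   return deg * 400.0 / 360.0
def degree_to_radian(deg): return deg * math.pi / 180.0
def grad_to_degree(grad):  return grad * 360.0 / 400.0
def grad_to_radian(grad):  return grad * math.pi / 200.0
def radian_to_degree(rad): return rad * 180.0 / math.pi
def radian_to_grad(rad):   return rad * 200.0 / math.pi

print(degree_to_grad(90.0), degree_to_radian(90.0))   # 100.0 grad, pi/2 rad
```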
+ + + + Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. + + The radian to convert. + The converted grad. + + + + Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). + + + + + Trigonometric Sine of an angle in radian, or opposite / hypotenuse. + + The angle in radian. + The sine of the radian angle. + + + + Trigonometric Sine of a Complex number. + + The complex value. + The sine of the complex number. + + + + Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. + + The angle in radian. + The cosine of an angle in radian. + + + + Trigonometric Cosine of a Complex number. + + The complex value. + The cosine of a complex number. + + + + Trigonometric Tangent of an angle in radian, or opposite / adjacent. + + The angle in radian. + The tangent of the radian angle. + + + + Trigonometric Tangent of a Complex number. + + The complex value. + The tangent of the complex number. + + + + Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. + + The angle in radian. + The cotangent of an angle in radian. + + + + Trigonometric Cotangent of a Complex number. + + The complex value. + The cotangent of the complex number. + + + + Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. + + The angle in radian. + The secant of the radian angle. + + + + Trigonometric Secant of a Complex number. + + The complex value. + The secant of the complex number. + + + + Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. + + The angle in radian. + Cosecant of an angle in radian. + + + + Trigonometric Cosecant of a Complex number. + + The complex value. + The cosecant of a complex number. + + + + Trigonometric principal Arc Sine in radian + + The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Sine of this Complex number. + + The complex value. + The arc sine of a complex number. + + + + Trigonometric principal Arc Cosine in radian + + The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Cosine of this Complex number. + + The complex value. + The arc cosine of a complex number. + + + + Trigonometric principal Arc Tangent in radian + + The opposite for a unit adjacent (i.e. opposite / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Tangent of this Complex number. + + The complex value. + The arc tangent of a complex number. + + + + Trigonometric principal Arc Cotangent in radian + + The adjacent for a unit opposite (i.e. adjacent / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cotangent of this Complex number. + + The complex value. + The arc cotangent of a complex number. + + + + Trigonometric principal Arc Secant in radian + + The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Secant of this Complex number. + + The complex value. + The arc secant of a complex number. + + + + Trigonometric principal Arc Cosecant in radian + + The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cosecant of this Complex number. + + The complex value. + The arc cosecant of a complex number. + + + + Hyperbolic Sine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic sine of the angle. 
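The less common entries above, the normalized sinc and the reciprocal trigonometric functions, follow directly from their definitions. A short sketch; the names mirror the descriptions, not the library's API:

```python
import math

def sinc(x):
    # normalized sinc: sin(pi*x) / (pi*x), with the removable singularity sinc(0) = 1
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

def sec(x): return 1.0 / math.cos(x)           # hypotenuse / adjacent
def csc(x): return 1.0 / math.sin(x)           # hypotenuse / opposite
def cot(x): return math.cos(x) / math.sin(x)   # adjacent / opposite

print(sinc(0.5), sec(0.0), cot(math.pi / 4.0))   # 2/pi, 1.0, ~1.0
```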
+ + + + Hyperbolic Sine of a Complex number. + + The complex value. + The hyperbolic sine of a complex number. + + + + Hyperbolic Cosine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic Cosine of the angle. + + + + Hyperbolic Cosine of a Complex number. + + The complex value. + The hyperbolic cosine of a complex number. + + + + Hyperbolic Tangent in radian + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic tangent of the angle. + + + + Hyperbolic Tangent of a Complex number. + + The complex value. + The hyperbolic tangent of a complex number. + + + + Hyperbolic Cotangent + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cotangent of the angle. + + + + Hyperbolic Cotangent of a Complex number. + + The complex value. + The hyperbolic cotangent of a complex number. + + + + Hyperbolic Secant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic secant of the angle. + + + + Hyperbolic Secant of a Complex number. + + The complex value. + The hyperbolic secant of a complex number. + + + + Hyperbolic Cosecant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cosecant of the angle. + + + + Hyperbolic Cosecant of a Complex number. + + The complex value. + The hyperbolic cosecant of a complex number. + + + + Hyperbolic Area Sine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Sine of this Complex number. + + The complex value. + The hyperbolic arc sine of a complex number. + + + + Hyperbolic Area Cosine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosine of this Complex number. + + The complex value. + The hyperbolic arc cosine of a complex number. + + + + Hyperbolic Area Tangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Tangent of this Complex number. + + The complex value. + The hyperbolic arc tangent of a complex number. + + + + Hyperbolic Area Cotangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cotangent of this Complex number. + + The complex value. + The hyperbolic arc cotangent of a complex number. + + + + Hyperbolic Area Secant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Secant of this Complex number. + + The complex value. + The hyperbolic arc secant of a complex number. + + + + Hyperbolic Area Cosecant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosecant of this Complex number. + + The complex value. + The hyperbolic arc cosecant of a complex number. + + + + Hamming window. Named after Richard Hamming. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hamming window. Named after Richard Hamming. + Periodic version, useful e.g. for FFT purposes. + + + + + Hann window. Named after Julius von Hann. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hann window. Named after Julius von Hann. + Periodic version, useful e.g. for FFT purposes. + + + + + Cosine window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Cosine window. + Periodic version, useful e.g. for FFT purposes. + + + + + Lanczos window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Lanczos window. 
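The symmetric-versus-periodic split above is the main thing to get right when generating windows: filter design wants the symmetric form, FFT analysis the periodic form (one sample short of symmetric). The sketch uses the textbook Hamming coefficients 0.54/0.46; the library's exact constants may differ slightly.

```python
import math

def hamming(width, periodic=False):
    # periodic windows divide by `width`, symmetric ones by `width - 1`
    if width == 1:
        return [1.0]
    denom = float(width) if periodic else float(width - 1)
    return [0.54 - 0.46 * math.cos(2.0 * math.pi * n / denom) for n in range(width)]

print(hamming(5))                  # symmetric: ends at the same value it starts with
print(hamming(5, periodic=True))   # periodic: equals the first 5 points of hamming(6)
```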
+ Periodic version, useful e.g. for FFT purposes. + + + + + Gauss window. + + + + + Blackman window. + + + + + Blackman-Harris window. + + + + + Blackman-Nuttall window. + + + + + Bartlett window. + + + + + Bartlett-Hann window. + + + + + Nuttall window. + + + + + Flat top window. + + + + + Uniform rectangular (Dirichlet) window. + + + + + Triangular window. + + + + + Tukey tapering window. A rectangular window bounded + by half a cosine window on each side. + + Width of the window + Fraction of the window occupied by the cosine parts + +
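The Tukey window described last is the only one above with a shape parameter: a flat top bounded by half-cosine tapers that together occupy a fraction r of the window (r = 0 degenerates to the rectangular window, r = 1 approaches a Hann shape). A sketch under that reading; the parameter name r is taken from the description above.

```python
import math

def tukey(width, r=0.5):
    # tapered cosine window: flat middle, half-cosine tapers covering a fraction r
    if width == 1 or r <= 0.0:
        return [1.0] * width
    r = min(r, 1.0)
    n_max = width - 1
    edge = r * n_max / 2.0            # length of each cosine taper
    w = [1.0] * width
    for n in range(width):
        m = min(n, n_max - n)         # distance from the nearer window edge (symmetry)
        if m < edge:
            w[n] = 0.5 * (1.0 + math.cos(math.pi * (m / edge - 1.0)))
    return w

print(tukey(9, r=0.5))                # tapers over 2 samples each side, flat for 5
```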
+
diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.dll b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.dll new file mode 100755 index 0000000..68dad64 Binary files /dev/null and b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.dll differ diff --git a/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.xml b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.xml new file mode 100755 index 0000000..5f9e8af --- /dev/null +++ b/oscardata/packages/MathNet.Numerics.4.12.0/lib/netstandard2.0/MathNet.Numerics.xml @@ -0,0 +1,57152 @@ + + + + MathNet.Numerics + + + + + Useful extension methods for Arrays. + + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Copies the values from on array to another. + + The source array. + The destination array. + + + + Enumerative Combinatorics and Counting. + + + + + Count the number of possible variations without repetition. + The order matters and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of distinct variations. + + + + Count the number of possible variations with repetition. + The order matters and each object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of distinct variations with repetition. + + + + Count the number of possible combinations without repetition. + The order does not matter and each object can be chosen only once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + Maximum number of combinations. + + + + Count the number of possible combinations with repetition. + The order does not matter and an object can be chosen more than once. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen 0, 1 or multiple times. + Maximum number of combinations with repetition. + + + + Count the number of possible permutations (without repetition). + + Number of (distinguishable) elements in the set. + Maximum number of permutations without repetition. + + + + Generate a random permutation, without repetition, by generating the index numbers 0 to N-1 and shuffle them randomly. + Implemented using Fisher-Yates Shuffling. + + An array of length N that contains (in any order) the integers of the interval [0, N). + Number of (distinguishable) elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation, without repetition, from a data array by reordering the provided array in-place. + Implemented using Fisher-Yates Shuffling. The provided data array will be modified. + + The data array to be reordered. The array will be modified by this routine. + The random number generator to use. Optional; the default random source will be used if null. + + + + Select a random permutation from a data sequence by returning the provided data in random order. + Implemented using Fisher-Yates Shuffling. 
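The counting helpers above map onto four classic formulas, depending on whether order matters and whether repetition is allowed. A sketch using the Python standard library (math.comb and math.perm require Python 3.8+); the names are descriptive, not the library's:

```python
from math import comb, factorial, perm

def variations(n, k):                   # ordered, no repetition: n! / (n-k)!
    return perm(n, k)

def variations_with_repetition(n, k):   # ordered, repetition allowed: n**k
    return n ** k

def combinations(n, k):                 # unordered, no repetition: C(n, k)
    return comb(n, k)

def combinations_with_repetition(n, k): # unordered, repetition allowed: C(n+k-1, k)
    return comb(n + k - 1, k)

def permutations(n):                    # all orderings of n distinguishable elements: n!
    return factorial(n)

print(variations(5, 2), combinations(5, 2), combinations_with_repetition(5, 2))  # 20 10 15
```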
+ + The data elements to be reordered. + The random number generator to use. Optional; the default random source will be used if null. + + + + Generate a random combination, without repetition, by randomly selecting some of N elements. + + Number of elements in the set. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Generate a random combination, without repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + Boolean mask array of length N, for each item true if it is selected. + + + + Select a random combination, without repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen combination, in the original order. + + + + Generates a random combination, with repetition, by randomly selecting k of N elements. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + Integer mask array of length N, for each item the number of times it was selected. + + + + Select a random combination, with repetition, from a data sequence by selecting k elements in original order. + + The data source to choose from. + Number of elements (k) to choose from the data set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen combination with repetition, in the original order. + + + + Generate a random variation, without repetition, by randomly selecting k of n elements with order. + Implemented using partial Fisher-Yates Shuffling. + + Number of elements in the set. + Number of elements to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, without repetition, from a data sequence by randomly selecting k elements in random order. + Implemented using partial Fisher-Yates Shuffling. + + The data source to choose from. + Number of elements (k) to choose from the set. Each element is chosen at most once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation, in random order. + + + + Generate a random variation, with repetition, by randomly selecting k of n elements with order. + + Number of elements in the set. + Number of elements to choose from the set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + An array of length K that contains the indices of the selections as integers of the interval [0, N). + + + + Select a random variation, with repetition, from a data sequence by randomly selecting k elements in random order. + + The data source to choose from. 
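Fisher-Yates shuffling, referenced repeatedly above, is worth spelling out because the uniformity of the permutation depends on drawing each swap partner only from the not-yet-fixed prefix. A sketch of the technique; the `rng` parameter is an illustrative stand-in for the "random number generator to use".

```python
import random

def fisher_yates_shuffle(items, rng=random):
    # in-place shuffle: every permutation of `items` is equally likely
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)          # swap partner drawn from 0..i inclusive
        items[i], items[j] = items[j], items[i]
    return items

def random_combination(items, k, rng=random):
    # choose k distinct indices, then return the elements in their original order
    picked = sorted(rng.sample(range(len(items)), k))
    return [items[i] for i in picked]

print(fisher_yates_shuffle(list(range(8))))
print(random_combination(list("abcdefgh"), 3))
```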
+ Number of elements (k) to choose from the data set. Elements can be chosen more than once. + The random number generator to use. Optional; the default random source will be used if null. + The chosen variation with repetition, in random order. + + + + 32-bit single precision complex numbers class. + + + + The class Complex32 provides all elementary operations + on complex numbers. All the operators +, -, + *, /, ==, != are defined in the + canonical way. Additional complex trigonometric functions + are also provided. Note that the Complex32 structures + has two special constant values and + . + + + + Complex32 x = new Complex32(1f,2f); + Complex32 y = Complex32.FromPolarCoordinates(1f, Math.Pi); + Complex32 z = (x + y) / (x - y); + + + + For mathematical details about complex numbers, please + have a look at the + Wikipedia + + + + + + The real component of the complex number. + + + + + The imaginary component of the complex number. + + + + + Initializes a new instance of the Complex32 structure with the given real + and imaginary parts. + + The value for the real component. + The value for the imaginary component. + + + + Creates a complex number from a point's polar coordinates. + + A complex number. + The magnitude, which is the distance from the origin (the intersection of the x-axis and the y-axis) to the number. + The phase, which is the angle from the line to the horizontal axis, measured in radians. + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to one and an imaginary number equal to zero. + + + + + Returns a new instance + with a real number equal to zero and an imaginary number equal to one. + + + + + Returns a new instance + with real and imaginary numbers positive infinite. + + + + + Returns a new instance + with real and imaginary numbers not a number. + + + + + Gets the real component of the complex number. + + The real component of the complex number. + + + + Gets the real imaginary component of the complex number. + + The real imaginary component of the complex number. + + + + Gets the phase or argument of this Complex32. + + + Phase always returns a value bigger than negative Pi and + smaller or equal to Pi. If this Complex32 is zero, the Complex32 + is assumed to be positive real with an argument of zero. + + The phase or argument of this Complex32 + + + + Gets the magnitude (or absolute value) of a complex number. + + Assuming that magnitude of (inf,a) and (a,inf) and (inf,inf) is inf and (NaN,a), (a,NaN) and (NaN,NaN) is NaN + The magnitude of the current instance. + + + + Gets the squared magnitude (or squared absolute value) of a complex number. + + The squared magnitude of the current instance. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex32. + + + + Gets a value indicating whether the Complex32 is zero. + + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. + + + true if this instance is ; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. 
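Python's cmath module covers the same cartesian/polar bookkeeping described for Complex32 above, which makes a compact reference for the magnitude, phase and FromPolarCoordinates entries. This is standard-library Python, not the .NET type:

```python
import cmath

z = complex(1.0, 2.0)
magnitude, phase = cmath.polar(z)          # |z| and arg(z), with arg in (-pi, pi]
round_trip = cmath.rect(magnitude, phase)  # back from polar to cartesian form

print(magnitude, phase)
print(round_trip)                          # ~ (1+2j) up to rounding
print(abs(z) ** 2)                         # squared magnitude, 5.0
```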
+ + + true if this instance is infinite; otherwise, false. + + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + + true if this instance is real nonnegative number; otherwise, false. + + + + + Exponential of this Complex32 (exp(x), E^x). + + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex32 (Base E). + + The natural logarithm of this complex number. + + + + Common Logarithm of this Complex32 (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex32 with custom base. + + The logarithm of this complex number. + + + + Raise this Complex32 to the given value. + + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex32 to the inverse of the given value. + + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex32 + + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex32 + + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex32. + + + + + Evaluate all cubic roots of this Complex32. + + + + + Equality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real and imaginary components of the two complex numbers are equal; false otherwise. + + + + Inequality test. + + One of complex numbers to compare. + The other complex numbers to compare. + true if the real or imaginary components of the two complex numbers are not equal; false otherwise. + + + + Unary addition. + + The complex number to operate on. + Returns the same complex number. + + + + Unary minus. + + The complex number to operate on. + The negated value of the . + + + Addition operator. Adds two complex numbers together. + The result of the addition. + One of the complex numbers to add. + The other complex numbers to add. + + + Subtraction operator. Subtracts two complex numbers. + The result of the subtraction. + The complex number to subtract from. + The complex number to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The complex numbers to add. + The float value to add. + + + Subtraction operator. Subtracts float value from a complex value. + The result of the subtraction. + The complex number to subtract from. + The float value to subtract. + + + Addition operator. Adds a complex number and float together. + The result of the addition. + The float value to add. + The complex numbers to add. + + + Subtraction operator. Subtracts complex value from a float value. + The result of the subtraction. + The float vale to subtract from. + The complex value to subtract. + + + Multiplication operator. Multiplies two complex numbers. + The result of the multiplication. + One of the complex numbers to multiply. + The other complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The float value to multiply. + The complex number to multiply. + + + Multiplication operator. Multiplies a complex number with a float value. + The result of the multiplication. + The complex number to multiply. 
+ The float value to multiply. + + + Division operator. Divides a complex number by another. + Enhanced Smith's algorithm for dividing two complex numbers + + The result of the division. + The dividend. + The divisor. + + + + Helper method for dividing. + + Re first + Im first + Re second + Im second + + + + + Division operator. Divides a float value by a complex number. + Algorithm based on Smith's algorithm + + The result of the division. + The dividend. + The divisor. + + + Division operator. Divides a complex number by a float value. + The result of the division. + The dividend. + The divisor. + + + + Computes the conjugate of a complex number and returns the result. + + + + + Returns the multiplicative inverse of a complex number. + + + + + Converts the value of the current complex number to its equivalent string representation in Cartesian form. + + The string representation of the current instance in Cartesian form. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format for its real and imaginary parts. + + The string representation of the current instance in Cartesian form. + A standard or custom numeric format string. + + is not a valid format string. + + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified culture-specific formatting information. + + The string representation of the current instance in Cartesian form, as specified by . + An object that supplies culture-specific formatting information. + + + Converts the value of the current complex number to its equivalent string representation + in Cartesian form by using the specified format and culture-specific format information for its real and imaginary parts. + The string representation of the current instance in Cartesian form, as specified by and . + A standard or custom numeric format string. + An object that supplies culture-specific formatting information. + + is not a valid format string. + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + The hash code for the complex number. + + + The hash code of the complex number. + + + The hash code is calculated as + System.Math.Exp(ComplexMath.Absolute(complexNumber)). + + + + + Checks if two complex numbers are equal. Two complex numbers are equal if their + corresponding real and imaginary components are equal. + + + Returns true if the two objects are the same object, or if their corresponding + real and imaginary components are equal, false otherwise. + + + The complex number to compare to with. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a float. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as float. 
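Putting the Complex32 members above together, a minimal sketch looks like the following. It relies only on the constructor, FromPolarCoordinates, the arithmetic operators and the static Conjugate documented in this section; printed values use the default Cartesian ToString form.

```csharp
using System;
using MathNet.Numerics;   // Complex32 lives in the MathNet.Numerics namespace

class Complex32Demo
{
    static void Main()
    {
        // Cartesian and polar construction, as in the example above
        var x = new Complex32(1f, 2f);
        var y = Complex32.FromPolarCoordinates(1f, (float)Math.PI);

        // Canonical operators and a few of the documented helpers
        Complex32 z = (x + y) / (x - y);
        Console.WriteLine(z);                       // default Cartesian string form
        Console.WriteLine(x.Magnitude);             // |x| = sqrt(1^2 + 2^2)
        Console.WriteLine(x.Phase);                 // atan2(2, 1), in radians
        Console.WriteLine(Complex32.Conjugate(x));  // (1, -2)
    }
}
```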
+ + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Explicit conversion of a real decimal to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Explicit conversion of a Complex to a Complex32. + + The decimal value to convert. + The result of the conversion. + + + + Implicit conversion of a real byte to a Complex32. + + The byte value to convert. + The result of the conversion. + + + + Implicit conversion of a real short to a Complex32. + + The short value to convert. + The result of the conversion. + + + + Implicit conversion of a signed byte to a Complex32. + + The signed byte value to convert. + The result of the conversion. + + + + Implicit conversion of a unsigned real short to a Complex32. + + The unsigned short value to convert. + The result of the conversion. + + + + Implicit conversion of a real int to a Complex32. + + The int value to convert. + The result of the conversion. + + + + Implicit conversion of a BigInteger int to a Complex32. + + The BigInteger value to convert. + The result of the conversion. + + + + Implicit conversion of a real long to a Complex32. + + The long value to convert. + The result of the conversion. + + + + Implicit conversion of a real uint to a Complex32. + + The uint value to convert. + The result of the conversion. + + + + Implicit conversion of a real ulong to a Complex32. + + The ulong value to convert. + The result of the conversion. + + + + Implicit conversion of a real float to a Complex32. + + The float value to convert. + The result of the conversion. + + + + Implicit conversion of a real double to a Complex32. + + The double value to convert. + The result of the conversion. + + + + Converts this Complex32 to a . + + A with the same values as this Complex32. + + + + Returns the additive inverse of a specified complex number. + + The result of the real and imaginary components of the value parameter multiplied by -1. + A complex number. + + + + Computes the conjugate of a complex number and returns the result. + + The conjugate of . + A complex number. + + + + Adds two complex numbers and returns the result. + + The sum of and . + The first complex number to add. + The second complex number to add. + + + + Subtracts one complex number from another and returns the result. + + The result of subtracting from . + The value to subtract from (the minuend). + The value to subtract (the subtrahend). + + + + Returns the product of two complex numbers. + + The product of the and parameters. + The first complex number to multiply. + The second complex number to multiply. 
+ + + + Divides one complex number by another and returns the result. + + The quotient of the division. + The complex number to be divided. + The complex number to divide by. + + + + Returns the multiplicative inverse of a complex number. + + The reciprocal of . + A complex number. + + + + Returns the square root of a specified complex number. + + The square root of . + A complex number. + + + + Gets the absolute value (or magnitude) of a complex number. + + The absolute value of . + A complex number. + + + + Returns e raised to the power specified by a complex number. + + The number e raised to the power . + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a complex number. + + The complex number raised to the power . + A complex number to be raised to a power. + A complex number that specifies a power. + + + + Returns a specified complex number raised to a power specified by a single-precision floating-point number. + + The complex number raised to the power . + A complex number to be raised to a power. + A single-precision floating-point number that specifies a power. + + + + Returns the natural (base e) logarithm of a specified complex number. + + The natural (base e) logarithm of . + A complex number. + + + + Returns the logarithm of a specified complex number in a specified base. + + The logarithm of in base . + A complex number. + The base of the logarithm. + + + + Returns the base-10 logarithm of a specified complex number. + + The base-10 logarithm of . + A complex number. + + + + Returns the sine of the specified complex number. + + The sine of . + A complex number. + + + + Returns the cosine of the specified complex number. + + The cosine of . + A complex number. + + + + Returns the tangent of the specified complex number. + + The tangent of . + A complex number. + + + + Returns the angle that is the arc sine of the specified complex number. + + The angle which is the arc sine of . + A complex number. + + + + Returns the angle that is the arc cosine of the specified complex number. + + The angle, measured in radians, which is the arc cosine of . + A complex number that represents a cosine. + + + + Returns the angle that is the arc tangent of the specified complex number. + + The angle that is the arc tangent of . + A complex number. + + + + Returns the hyperbolic sine of the specified complex number. + + The hyperbolic sine of . + A complex number. + + + + Returns the hyperbolic cosine of the specified complex number. + + The hyperbolic cosine of . + A complex number. + + + + Returns the hyperbolic tangent of the specified complex number. + + The hyperbolic tangent of . + A complex number. + + + + Extension methods for the Complex type provided by System.Numerics + + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the squared magnitude of the Complex number. + + The number to perform this operation on. + The squared magnitude of the Complex number. + + + + Gets the unity of this complex (same argument, but on the unit circle; exp(I*arg)) + + The unity of this Complex. + + + + Gets the conjugate of the Complex number. + + The number to perform this operation on. + + The semantic of setting the conjugate is such that + + // a, b of type Complex32 + a.Conjugate = b; + + is equivalent to + + // a, b of type Complex32 + a = b.Conjugate + + + The conjugate of the number. 
+ + + + Returns the multiplicative inverse of a complex number. + + + + + Exponential of this Complex (exp(x), E^x). + + The number to perform this operation on. + + The exponential of this complex number. + + + + + Natural Logarithm of this Complex (Base E). + + The number to perform this operation on. + + The natural logarithm of this complex number. + + + + + Common Logarithm of this Complex (Base 10). + + The common logarithm of this complex number. + + + + Logarithm of this Complex with custom base. + + The logarithm of this complex number. + + + + Raise this Complex to the given value. + + The number to perform this operation on. + + The exponent. + + + The complex number raised to the given exponent. + + + + + Raise this Complex to the inverse of the given value. + + The number to perform this operation on. + + The root exponent. + + + The complex raised to the inverse of the given exponent. + + + + + The Square (power 2) of this Complex + + The number to perform this operation on. + + The square of this complex number. + + + + + The Square Root (power 1/2) of this Complex + + The number to perform this operation on. + + The square root of this complex number. + + + + + Evaluate all square roots of this Complex. + + + + + Evaluate all cubic roots of this Complex. + + + + + Gets a value indicating whether the Complex32 is zero. + + The number to perform this operation on. + true if this instance is zero; otherwise, false. + + + + Gets a value indicating whether the Complex32 is one. + + The number to perform this operation on. + true if this instance is one; otherwise, false. + + + + Gets a value indicating whether the Complex32 is the imaginary unit. + + true if this instance is ImaginaryOne; otherwise, false. + The number to perform this operation on. + + + + Gets a value indicating whether the provided Complex32evaluates + to a value that is not a number. + + The number to perform this operation on. + + true if this instance is NaN; otherwise, + false. + + + + + Gets a value indicating whether the provided Complex32 evaluates to an + infinite value. + + The number to perform this operation on. + + true if this instance is infinite; otherwise, false. + + + True if it either evaluates to a complex infinity + or to a directed infinity. + + + + + Gets a value indicating whether the provided Complex32 is real. + + The number to perform this operation on. + true if this instance is a real number; otherwise, false. + + + + Gets a value indicating whether the provided Complex32 is real and not negative, that is >= 0. + + The number to perform this operation on. + + true if this instance is real nonnegative number; otherwise, false. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. + + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + The string to parse. 
+ + + + + Creates a complex number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Parse a part (real or complex) from a complex number. + + Start Token. + Is set to true if the part identified itself as being imaginary. + + An that supplies culture-specific + formatting information. + + Resulting part as double. + + + + + Converts the string representation of a complex number to a double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to double-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + + + Creates a Complex32 number based on a string. The string can be in the + following formats (without the quotes): 'n', 'ni', 'n +/- ni', + 'ni +/- n', 'n,n', 'n,ni,' '(n,n)', or '(n,ni)', where n is a double. + + + A complex number containing the value specified by the given string. + + + the string to parse. + + + An that supplies culture-specific + formatting information. + + + + + Converts the string representation of a complex number to a single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain complex32.Zero. This parameter is passed uninitialized. + + + + + Converts the string representation of a complex number to single-precision complex number equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex number to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will contain Complex.Zero. This parameter is passed uninitialized. + + + + + A collection of frequently used mathematical constants. 
+ + + + The number e + + + The number log[2](e) + + + The number log[10](e) + + + The number log[e](2) + + + The number log[e](10) + + + The number log[e](pi) + + + The number log[e](2*pi)/2 + + + The number 1/e + + + The number sqrt(e) + + + The number sqrt(2) + + + The number sqrt(3) + + + The number sqrt(1/2) = 1/sqrt(2) = sqrt(2)/2 + + + The number sqrt(3)/2 + + + The number pi + + + The number pi*2 + + + The number pi/2 + + + The number pi*3/2 + + + The number pi/4 + + + The number sqrt(pi) + + + The number sqrt(2pi) + + + The number sqrt(pi/2) + + + The number sqrt(2*pi*e) + + + The number log(sqrt(2*pi)) + + + The number log(sqrt(2*pi*e)) + + + The number log(2 * sqrt(e / pi)) + + + The number 1/pi + + + The number 2/pi + + + The number 1/sqrt(pi) + + + The number 1/sqrt(2pi) + + + The number 2/sqrt(pi) + + + The number 2 * sqrt(e / pi) + + + The number (pi)/180 - factor to convert from Degree (deg) to Radians (rad). + + + + + The number (pi)/200 - factor to convert from NewGrad (grad) to Radians (rad). + + + + + The number ln(10)/20 - factor to convert from Power Decibel (dB) to Neper (Np). Use this version when the Decibel represent a power gain but the compared values are not powers (e.g. amplitude, current, voltage). + + + The number ln(10)/10 - factor to convert from Neutral Decibel (dB) to Neper (Np). Use this version when either both or neither of the Decibel and the compared values represent powers. + + + The Catalan constant + Sum(k=0 -> inf){ (-1)^k/(2*k + 1)2 } + + + The Euler-Mascheroni constant + lim(n -> inf){ Sum(k=1 -> n) { 1/k - log(n) } } + + + The number (1+sqrt(5))/2, also known as the golden ratio + + + The Glaisher constant + e^(1/12 - Zeta(-1)) + + + The Khinchin constant + prod(k=1 -> inf){1+1/(k*(k+2))^log(k,2)} + + + + The size of a double in bytes. + + + + + The size of an int in bytes. + + + + + The size of a float in bytes. + + + + + The size of a Complex in bytes. + + + + + The size of a Complex in bytes. 
+ + + + Speed of Light in Vacuum: c_0 = 2.99792458e8 [m s^-1] (defined, exact; 2007 CODATA) + + + Magnetic Permeability in Vacuum: mu_0 = 4*Pi * 10^-7 [N A^-2 = kg m A^-2 s^-2] (defined, exact; 2007 CODATA) + + + Electric Permittivity in Vacuum: epsilon_0 = 1/(mu_0*c_0^2) [F m^-1 = A^2 s^4 kg^-1 m^-3] (defined, exact; 2007 CODATA) + + + Characteristic Impedance of Vacuum: Z_0 = mu_0*c_0 [Ohm = m^2 kg s^-3 A^-2] (defined, exact; 2007 CODATA) + + + Newtonian Constant of Gravitation: G = 6.67429e-11 [m^3 kg^-1 s^-2] (2007 CODATA) + + + Planck's constant: h = 6.62606896e-34 [J s = m^2 kg s^-1] (2007 CODATA) + + + Reduced Planck's constant: h_bar = h / (2*Pi) [J s = m^2 kg s^-1] (2007 CODATA) + + + Planck mass: m_p = (h_bar*c_0/G)^(1/2) [kg] (2007 CODATA) + + + Planck temperature: T_p = (h_bar*c_0^5/G)^(1/2)/k [K] (2007 CODATA) + + + Planck length: l_p = h_bar/(m_p*c_0) [m] (2007 CODATA) + + + Planck time: t_p = l_p/c_0 [s] (2007 CODATA) + + + Elementary Electron Charge: e = 1.602176487e-19 [C = A s] (2007 CODATA) + + + Magnetic Flux Quantum: theta_0 = h/(2*e) [Wb = m^2 kg s^-2 A^-1] (2007 CODATA) + + + Conductance Quantum: G_0 = 2*e^2/h [S = m^-2 kg^-1 s^3 A^2] (2007 CODATA) + + + Josephson Constant: K_J = 2*e/h [Hz V^-1] (2007 CODATA) + + + Von Klitzing Constant: R_K = h/e^2 [Ohm = m^2 kg s^-3 A^-2] (2007 CODATA) + + + Bohr Magneton: mu_B = e*h_bar/2*m_e [J T^-1] (2007 CODATA) + + + Nuclear Magneton: mu_N = e*h_bar/2*m_p [J T^-1] (2007 CODATA) + + + Fine Structure Constant: alpha = e^2/4*Pi*e_0*h_bar*c_0 [1] (2007 CODATA) + + + Rydberg Constant: R_infty = alpha^2*m_e*c_0/2*h [m^-1] (2007 CODATA) + + + Bor Radius: a_0 = alpha/4*Pi*R_infty [m] (2007 CODATA) + + + Hartree Energy: E_h = 2*R_infty*h*c_0 [J] (2007 CODATA) + + + Quantum of Circulation: h/2*m_e [m^2 s^-1] (2007 CODATA) + + + Fermi Coupling Constant: G_F/(h_bar*c_0)^3 [GeV^-2] (2007 CODATA) + + + Weak Mixin Angle: sin^2(theta_W) [1] (2007 CODATA) + + + Electron Mass: [kg] (2007 CODATA) + + + Electron Mass Energy Equivalent: [J] (2007 CODATA) + + + Electron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Electron Compton Wavelength: [m] (2007 CODATA) + + + Classical Electron Radius: [m] (2007 CODATA) + + + Thomson Cross Section: [m^2] (2002 CODATA) + + + Electron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Electon G-Factor: [1] (2007 CODATA) + + + Muon Mass: [kg] (2007 CODATA) + + + Muon Mass Energy Equivalent: [J] (2007 CODATA) + + + Muon Molar Mass: [kg mol^-1] (2007 CODATA) + + + Muon Compton Wavelength: [m] (2007 CODATA) + + + Muon Magnetic Moment: [J T^-1] (2007 CODATA) + + + Muon G-Factor: [1] (2007 CODATA) + + + Tau Mass: [kg] (2007 CODATA) + + + Tau Mass Energy Equivalent: [J] (2007 CODATA) + + + Tau Molar Mass: [kg mol^-1] (2007 CODATA) + + + Tau Compton Wavelength: [m] (2007 CODATA) + + + Proton Mass: [kg] (2007 CODATA) + + + Proton Mass Energy Equivalent: [J] (2007 CODATA) + + + Proton Molar Mass: [kg mol^-1] (2007 CODATA) + + + Proton Compton Wavelength: [m] (2007 CODATA) + + + Proton Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton G-Factor: [1] (2007 CODATA) + + + Proton Shielded Magnetic Moment: [J T^-1] (2007 CODATA) + + + Proton Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Proton Shielded Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Neutron Mass: [kg] (2007 CODATA) + + + Neutron Mass Energy Equivalent: [J] (2007 CODATA) + + + Neutron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Neuron Compton Wavelength: [m] (2007 CODATA) + + + Neutron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Neutron G-Factor: [1] 
(2007 CODATA) + + + Neutron Gyro-Magnetic Ratio: [s^-1 T^-1] (2007 CODATA) + + + Deuteron Mass: [kg] (2007 CODATA) + + + Deuteron Mass Energy Equivalent: [J] (2007 CODATA) + + + Deuteron Molar Mass: [kg mol^-1] (2007 CODATA) + + + Deuteron Magnetic Moment: [J T^-1] (2007 CODATA) + + + Helion Mass: [kg] (2007 CODATA) + + + Helion Mass Energy Equivalent: [J] (2007 CODATA) + + + Helion Molar Mass: [kg mol^-1] (2007 CODATA) + + + Avogadro constant: [mol^-1] (2010 CODATA) + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 000 + + + The SI prefix factor corresponding to 1 000 000 + + + The SI prefix factor corresponding to 1 000 + + + The SI prefix factor corresponding to 100 + + + The SI prefix factor corresponding to 10 + + + The SI prefix factor corresponding to 0.1 + + + The SI prefix factor corresponding to 0.01 + + + The SI prefix factor corresponding to 0.001 + + + The SI prefix factor corresponding to 0.000 001 + + + The SI prefix factor corresponding to 0.000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 001 + + + The SI prefix factor corresponding to 0.000 000 000 000 000 000 000 001 + + + + Sets parameters for the library. + + + + + Use a specific provider if configured, e.g. using + environment variables, or fall back to the best providers. + + + + + Use the best provider available. + + + + + Use the Intel MKL native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Use the Intel MKL native provider for linear algebra, with the specified configuration parameters. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the Intel MKL native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Use the Nvidia CUDA native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the Nvidia CUDA native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Use the OpenBLAS native provider for linear algebra. + Throws if it is not available or failed to initialize, in which case the previous provider is still active. + + + + + Try to use the OpenBLAS native provider for linear algebra. + + + True if the provider was found and initialized successfully. + False if it failed and the previous provider is still active. + + + + + Try to use any available native provider in an undefined order. + + + True if one of the native providers was found and successfully initialized. + False if it failed and the previous provider is still active. 
+ + + + + Gets or sets a value indicating whether the distribution classes check validate each parameter. + For the multivariate distributions this could involve an expensive matrix factorization. + The default setting of this property is true. + + + + + Gets or sets a value indicating whether to use thread safe random number generators (RNG). + Thread safe RNG about two and half time slower than non-thread safe RNG. + + + true to use thread safe random number generators ; otherwise, false. + + + + + Optional path to try to load native provider binaries from. + + + + + Gets or sets a value indicating how many parallel worker threads shall be used + when parallelization is applicable. + + Default to the number of processor cores, must be between 1 and 1024 (inclusive). + + + + Gets or sets the TaskScheduler used to schedule the worker tasks. + + + + + Gets or sets the order of the matrix when linear algebra provider + must calculate multiply in parallel threads. + + The order. Default 64, must be at least 3. + + + + Gets or sets the number of elements a vector or matrix + must contain before we multiply threads. + + Number of elements. Default 300, must be at least 3. + + + + Numerical Derivative. + + + + + Initialized a NumericalDerivative with the given points and center. + + + + + Initialized a NumericalDerivative with the default points and center for the given order. + + + + + Evaluates the derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + Derivative order. + + + + Creates a function handle for the derivative of a scalar univariate function. + + Univariate function handle. + Derivative order. + + + + Evaluates the first derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the first derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the second derivative of a scalar univariate function. + + Univariate function handle. + Point at which to evaluate the derivative. + + + + Creates a function handle for the second derivative of a scalar univariate function. + + Univariate function handle. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + Derivative order. + + + + Evaluates the first partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a multivariate function. + + Multivariate function handle. + Index of independent variable for partial derivative. + + + + Evaluates the partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + + + + Creates a function handle for the partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + Derivative order. 
+ + + + Evaluates the first partial derivative of a bivariate function. + + Bivariate function handle. + First argument at which to evaluate the derivative. + Second argument at which to evaluate the derivative. + Index of independent variable for partial derivative. + + + + Creates a function handle for the first partial derivative of a bivariate function. + + Bivariate function handle. + Index of independent variable for partial derivative. + + + + Class to calculate finite difference coefficients using Taylor series expansion method. + + + For n points, coefficients are calculated up to the maximum derivative order possible (n-1). + The current function value position specifies the "center" for surrounding coefficients. + Selecting the first, middle or last positions represent forward, backwards and central difference methods. + + + + + + + Number of points for finite difference coefficients. Changing this value recalculates the coefficients table. + + + + + Initializes a new instance of the class. + + Number of finite difference coefficients. + + + + Gets the finite difference coefficients for a specified center and order. + + Current function position with respect to coefficients. Must be within point range. + Order of finite difference coefficients. + Vector of finite difference coefficients. + + + + Gets the finite difference coefficients for all orders at a specified center. + + Current function position with respect to coefficients. Must be within point range. + Rectangular array of coefficients, with columns specifying order. + + + + Type of finite different step size. + + + + + The absolute step size value will be used in numerical derivatives, regardless of order or function parameters. + + + + + A base step size value, h, will be scaled according to the function input parameter. A common example is hx = h*(1+abs(x)), however + this may vary depending on implementation. This definition only guarantees that the only scaling will be relative to the + function input parameter and not the order of the finite difference derivative. + + + + + A base step size value, eps (typically machine precision), is scaled according to the finite difference coefficient order + and function input parameter. The initial scaling according to finite different coefficient order can be thought of as producing a + base step size, h, that is equivalent to scaling. This step size is then scaled according to the function + input parameter. Although implementation may vary, an example of second order accurate scaling may be (eps)^(1/3)*(1+abs(x)). + + + + + Class to evaluate the numerical derivative of a function using finite difference approximations. + Variable point and center methods can be initialized . + This class can also be used to return function handles (delegates) for a fixed derivative order and variable. + It is possible to evaluate the derivative and partial derivative of univariate and multivariate functions respectively. + + + + + Initializes a NumericalDerivative class with the default 3 point center difference method. + + + + + Initialized a NumericalDerivative class. + + Number of points for finite difference derivatives. + Location of the center with respect to other points. Value ranges from zero to points-1. + + + + Sets and gets the finite difference step size. This value is for each function evaluation if relative step size types are used. + If the base step size used in scaling is desired, see . + + + Setting then getting the StepSize may return a different value. 
This is not unusual since a user-defined step size is converted to a + base-2 representable number to improve finite difference accuracy. + + + + + Sets and gets the base finite difference step size. This assigned value to this parameter is only used if is set to RelativeX. + However, if the StepType is Relative, it will contain the base step size computed from based on the finite difference order. + + + + + Sets and gets the base finite difference step size. This parameter is only used if is set to Relative. + By default this is set to machine epsilon, from which is computed. + + + + + Sets and gets the location of the center point for the finite difference derivative. + + + + + Number of times a function is evaluated for numerical derivatives. + + + + + Type of step size for computing finite differences. If set to absolute, dx = h. + If set to relative, dx = (1+abs(x))*h^(2/(order+1)). This provides accurate results when + h is approximately equal to the square-root of machine accuracy, epsilon. + + + + + Evaluates the derivative of equidistant points using the finite difference method. + + Vector of points StepSize apart. + Derivative order. + Finite difference step size. + Derivative of points of the specified order. + + + + Evaluates the derivative of a scalar univariate function. + + + Supplying the optional argument currentValue will reduce the number of function evaluations + required to calculate the finite difference derivative. + + Function handle. + Point at which to compute the derivative. + Derivative order. + Current function value at center. + Function derivative at x of the specified order. + + + + Creates a function handle for the derivative of a scalar univariate function. + + Input function handle. + Derivative order. + Function handle that evaluates the derivative of input function at a fixed order. + + + + Evaluates the partial derivative of a multivariate function. + + Multivariate function handle. + Vector at which to evaluate the derivative. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Function partial derivative at x of the specified order. + + + + Evaluates the partial derivatives of a multivariate function array. + + + This function assumes the input vector x is of the correct length for f. + + Multivariate vector function array handle. + Vector at which to evaluate the derivatives. + Index of independent variable for partial derivative. + Derivative order. + Current function value at center. + Vector of functions partial derivatives at x of the specified order. + + + + Creates a function handle for the partial derivative of a multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at a fixed order. + + + + Creates a function handle for the partial derivative of a vector multivariate function. + + Input function handle. + Index of the independent variable for partial derivative. + Derivative order. + Function handle that evaluates partial derivative of input function at fixed order. + + + + Evaluates the mixed partial derivative of variable order for multivariate functions. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function handle. + Points at which to evaluate the derivative. 
+ Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivative at x of the specified order. + + + + Evaluates the mixed partial derivative of variable order for multivariate function arrays. + + + This function recursively uses to evaluate mixed partial derivative. + Therefore, it is more efficient to call for higher order derivatives of + a single independent variable. + + Multivariate function array handle. + Vector at which to evaluate the derivative. + Vector of indices for the independent variables at descending derivative orders. + Highest order of differentiation. + Current function value at center. + Function mixed partial derivatives at x of the specified order. + + + + Creates a function handle for the mixed partial derivative of a multivariate function. + + Input function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Creates a function handle for the mixed partial derivative of a multivariate vector function. + + Input vector function handle. + Vector of indices for the independent variables at descending derivative orders. + Highest derivative order. + Function handle that evaluates the fixed mixed partial derivative of input function at fixed order. + + + + Resets the evaluation counter. + + + + + Class for evaluating the Hessian of a smooth continuously differentiable function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Hessian object with a three point central difference method. + + + + + Creates a numerical Hessian with a specified differentiation scheme. + + Number of points for Hessian evaluation. + Center point for differentiation. + + + + Evaluates the Hessian of the scalar univariate function f at points x. + + Scalar univariate function handle. + Point at which to evaluate Hessian. + Hessian tensor. + + + + Evaluates the Hessian of a multivariate function f at points x. + + + This method of computing the Hessian is only valid for Lipschitz continuous functions. + The function mirrors the Hessian along the diagonal since d2f/dxdy = d2f/dydx for continuously differentiable functions. + + Multivariate function handle.> + Points at which to evaluate Hessian.> + Hessian tensor. + + + + Resets the function evaluation counter for the Hessian. + + + + + Class for evaluating the Jacobian of a function using finite differences. + By default, a central 3-point method is used. + + + + + Number of function evaluations. + + + + + Creates a numerical Jacobian object with a three point central difference method. + + + + + Creates a numerical Jacobian with a specified differentiation scheme. + + Number of points for Jacobian evaluation. + Center point for differentiation. + + + + Evaluates the Jacobian of scalar univariate function f at point x. + + Scalar univariate function handle. + Point at which to evaluate Jacobian. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function f at vector x. + + + This function assumes that the length of vector x consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Jacobian vector. 
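The NumericalJacobian and NumericalHessian classes described here follow the same pattern. The sketch below assumes the parameterless three-point-central-difference constructors and the Evaluate(f, x) overloads documented above; exact return types are inferred from the "Jacobian vector" / "Hessian tensor" descriptions.

```csharp
using System;
using MathNet.Numerics.Differentiation;

class JacobianHessianDemo
{
    static void Main()
    {
        // f(x, y) = x^2 * y, written as a multivariate function handle
        Func<double[], double> f = v => v[0] * v[0] * v[1];
        double[] at = { 2.0, 3.0 };

        // Gradient (Jacobian of a scalar function) and Hessian by central differences
        double[] grad = new NumericalJacobian().Evaluate(f, at);  // ~[2xy, x^2] = [12, 4]
        double[,] hess = new NumericalHessian().Evaluate(f, at);  // ~[[2y, 2x], [2x, 0]]

        Console.WriteLine($"{grad[0]}, {grad[1]}; {hess[0, 0]}");
    }
}
```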
+ + + + Evaluates the Jacobian of a multivariate function f at vector x given a current function value. + + + To minimize the number of function evaluations, a user can supply the current value of the function + to be used in computing the Jacobian. This value must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. + + Multivariate function handle. + Points at which to evaluate Jacobian. + Current function value at finite difference center. + Jacobian vector. + + + + Evaluates the Jacobian of a multivariate function array f at vector x. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Jacobian matrix. + + + + Evaluates the Jacobian of a multivariate function array f at vector x given a vector of current function values. + + + To minimize the number of function evaluations, a user can supply a vector of current values of the functions + to be used in computing the Jacobian. These value must correspond to the "center" location for the + finite differencing. If a scheme is used where the center value is not evaluated, this will provide no + added efficiency. This method also assumes that the length of vector x consistent with the argument count of f. + + Multivariate function array handle. + Vector at which to evaluate Jacobian. + Vector of current function values. + Jacobian matrix. + + + + Resets the function evaluation counter for the Jacobian. + + + + + Evaluates the Riemann-Liouville fractional derivative that uses the double exponential integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Double-Exponential integration. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Legendre integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The number of Gauss-Legendre points. + Approximation of the differintegral of order n at x. + + + + Evaluates the Riemann-Liouville fractional derivative that uses the Gauss-Kronrod integration. + + + order = 1.0 : normal derivative + order = 0.5 : semi-derivative + order = -0.5 : semi-integral + order = -1.0 : normal integral + + The analytic smooth function to differintegrate. + The evaluation point. + The order of fractional derivative. + The reference point of integration. + The expected relative accuracy of the Gauss-Kronrod integration. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the differintegral of order n at x. + + + + Metrics to measure the distance between two structures. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. the L1-norm (Manhattan) of the difference. + + + + + Sum of Absolute Difference (SAD), i.e. 
the L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Mean-Absolute Error (MAE), i.e. the normalized L1-norm (Manhattan) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Sum of Squared Difference (SSD), i.e. the squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Mean-Squared Error (MSE), i.e. the normalized squared L2-norm (Euclidean) of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Euclidean Distance, i.e. the L2-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Manhattan Distance, i.e. the L1-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Chebyshev Distance, i.e. the Infinity-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Minkowski Distance, i.e. the generalized p-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Canberra Distance, a weighted version of the L1-norm of the difference. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Cosine Distance, representing the angular distance while ignoring the scale. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Hamming Distance, i.e. the number of positions that have different values in the vectors. + + + + + Pearson's distance, i.e. 1 - the person correlation coefficient. + + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Jaccard distance, i.e. 1 - the Jaccard index. + + Thrown if a or b are null. + Throw if a and b are of different lengths. + Jaccard distance. + + + + Discrete Univariate Bernoulli distribution. + The Bernoulli distribution is a distribution over bits. The parameter + p specifies the probability that a 1 is generated. + Wikipedia - Bernoulli distribution. + + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + If the Bernoulli parameter is not in the range [0,1]. + + + + Initializes a new instance of the Bernoulli class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + If the Bernoulli parameter is not in the range [0,1]. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. 
+ + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Generates one sample from the Bernoulli distribution. + + The random source to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A random sample from the Bernoulli distribution. + + + + Samples a Bernoulli distributed random variable. + + A sample from the Bernoulli distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. + + + + Samples a sequence of Bernoulli distributed random variables. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Samples a Bernoulli distributed random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + A sample from the Bernoulli distribution. 
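In code, the instance and static Bernoulli members above combine as follows; this is a minimal sketch using the documented constructor, Probability, Sample and the static PMF helper.

```csharp
using System;
using MathNet.Numerics.Distributions;

class BernoulliDemo
{
    static void Main()
    {
        // A biased coin with P(1) = 0.3
        var coin = new Bernoulli(0.3);

        Console.WriteLine(coin.Mean);             // 0.3
        Console.WriteLine(coin.Variance);         // 0.3 * 0.7 = 0.21
        Console.WriteLine(coin.Probability(1));   // PMF at k = 1, i.e. 0.3

        int sample = coin.Sample();               // a single 0/1 draw
        Console.WriteLine(sample);

        // Static helpers mirror the instance members
        Console.WriteLine(Bernoulli.PMF(0.3, 0)); // 0.7
    }
}
```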
+ + + + Samples a sequence of Bernoulli distributed random variables. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + a sequence of samples from the distribution. + + + + Continuous Univariate Beta distribution. + For details about this distribution, see + Wikipedia - Beta distribution. + + + There are a few special cases for the parameterization of the Beta distribution. When both + shape parameters are positive infinity, the Beta distribution degenerates to a point distribution + at 0.5. When one of the shape parameters is positive infinity, the distribution degenerates to a point + distribution at the positive infinity. When both shape parameters are 0.0, the Beta distribution + degenerates to a Bernoulli distribution with parameter 0.5. When one shape parameter is 0.0, the + distribution degenerates to a point distribution at the non-zero shape parameter. + + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Beta class. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + A string representation of the Beta distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + Gets the α shape parameter of the Beta distribution. Range: α ≥ 0. + + + + + Gets the β shape parameter of the Beta distribution. Range: β ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Beta distribution. + + + + + Gets the variance of the Beta distribution. + + + + + Gets the standard deviation of the Beta distribution. + + + + + Gets the entropy of the Beta distribution. + + + + + Gets the skewness of the Beta distribution. + + + + + Gets the mode of the Beta distribution; when there are multiple answers, this routine will return 0.5. + + + + + Gets the median of the Beta distribution. + + + + + Gets the minimum of the Beta distribution. + + + + + Gets the maximum of the Beta distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . 
+ + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Beta distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Beta distribution. + + a sequence of samples from the distribution. + + + + Samples Beta distributed random variables by sampling two Gamma variables and normalizing. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a random number from the Beta distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. 
+ The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Beta-Binomial distribution. + The beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising + when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. + The beta-binomial distribution is the binomial distribution in which the probability of success at each of n trials is not fixed but randomly drawn from a beta distribution. + It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics to capture overdispersion in binomial type distributed data. + Wikipedia - Beta-Binomial distribution. + + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Initializes a new instance of the class. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. 
Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location in the domain where we want to evaluate the probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The number of Bernoulli trials n - n is a positive integer + Shape parameter alpha of the Beta distribution. Range: a > 0. + Shape parameter beta of the Beta distribution. Range: b > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples BetaBinomial distributed random variables by sampling a Beta distribution then passing to a Binomial distribution. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a random number from the BetaBinomial distribution. + + + + Samples a BetaBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of BetaBinomial distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a BetaBinomial distributed random variable. + + The random number generator to use. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Samples an array of BetaBinomial distributed random variables. + + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the Beta distribution. Range: α ≥ 0. + The β shape parameter of the Beta distribution. Range: β ≥ 0. + The number of trials (n). Range: n ≥ 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Initializes a new instance of the BetaScaled class. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. 
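The Beta and Beta-Binomial members documented above can be exercised roughly as below (the remaining BetaScaled members continue after this sketch). The class name `BetaBinomial`, its (n, α, β) constructor order, and the instance member names are assumptions taken from the parameter descriptions, so verify them against the shipped assembly.

```csharp
// Sketch of the Beta and Beta-Binomial usage described above; names are assumed.
using System;
using MathNet.Numerics.Distributions;

var beta = new Beta(2.0, 5.0);                 // shape parameters α, β
Console.WriteLine(beta.Mean);                  // α / (α + β) ≈ 0.2857
Console.WriteLine(beta.Density(0.25));         // PDF at x = 0.25
Console.WriteLine(Beta.CDF(2.0, 5.0, 0.25));   // static CDF helper

// Beta-Binomial: n Bernoulli trials whose p is itself drawn from Beta(α, β).
var bb = new BetaBinomial(10, 2.0, 5.0);       // (n, α, β) per the constructor above
Console.WriteLine(bb.Probability(3));          // P(X = 3)
Console.WriteLine(bb.Sample());                // one draw in 0..n
```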
+ + + + Create a Beta PERT distribution, used in risk analysis and other domains where an expert forecast + is used to construct an underlying beta distribution. + + The minimum value. + The maximum value. + The most likely value (mode). + The random number generator which is used to draw random samples. + The Beta distribution derived from the PERT parameters. + + + + A string representation of the distribution. + + A string representation of the BetaScaled distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the α shape parameter of the BetaScaled distribution. Range: α > 0. + + + + + Gets the β shape parameter of the BetaScaled distribution. Range: β > 0. + + + + + Gets the location (μ) of the BetaScaled distribution. + + + + + Gets the scale (σ) of the BetaScaled distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the BetaScaled distribution. + + + + + Gets the variance of the BetaScaled distribution. + + + + + Gets the standard deviation of the BetaScaled distribution. + + + + + Gets the entropy of the BetaScaled distribution. + + + + + Gets the skewness of the BetaScaled distribution. + + + + + Gets the mode of the BetaScaled distribution; when there are multiple answers, this routine will return 0.5. + + + + + Gets the median of the BetaScaled distribution. + + + + + Gets the minimum of the BetaScaled distribution. + + + + + Gets the maximum of the BetaScaled distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The α shape parameter of the BetaScaled distribution. 
Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The α shape parameter of the BetaScaled distribution. Range: α > 0. + The β shape parameter of the BetaScaled distribution. Range: β > 0. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. 
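The BetaScaled block above adds a location (μ) and scale (σ) transform on top of Beta, plus a factory that builds a PERT-style distribution from minimum/maximum/most-likely estimates. A hedged sketch: the (α, β, μ, σ) constructor order follows the parameter descriptions, and `PERT` is an assumed factory name derived from the text, not a verified signature.

```csharp
// BetaScaled: Beta(α, β) shifted by location μ and stretched by scale σ.
using System;
using MathNet.Numerics.Distributions;

var bs = new BetaScaled(2.0, 5.0, 10.0, 4.0);  // α, β, location μ, scale σ
Console.WriteLine(bs.Mean);                    // mean on the shifted/scaled support
Console.WriteLine(bs.Density(11.0));           // PDF at x = 11

// PERT-style construction from min / max / most-likely estimates
// (assumed factory name based on the description above).
var pert = BetaScaled.PERT(8.0, 20.0, 12.0);
Console.WriteLine(pert.Mean);
```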
+ + + + Discrete Univariate Binomial distribution. + For details about this distribution, see + Wikipedia - Binomial distribution. + + + The distribution is parameterized by a probability (between 0.0 and 1.0). + + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + If is not in the interval [0.0,1.0]. + If is negative. + + + + Initializes a new instance of the Binomial class. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The random number generator which is used to draw random samples. + If is not in the interval [0.0,1.0]. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + + + + Gets the success probability in each trial. Range: 0 ≤ p ≤ 1. + + + + + Gets the number of trials. Range: n ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets all modes of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the Binomial distribution without doing parameter checking. + + The random number generator to use. + The success probability (p) in each trial. 
Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successful trials. + + + + Samples a Binomially distributed random variable. + + The number of successes in N trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Binomially distributed random variables. + + a sequence of successes in N trials. + + + + Samples a binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The random number generator to use. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Samples a binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + The number of successes in trials. + + + + Samples a sequence of binomially distributed random variable. + + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The success probability (p) in each trial. Range: 0 ≤ p ≤ 1. + The number of trials (n). Range: n ≥ 0. + a sequence of successes in trials. + + + + Gets the scale (a) of the distribution. Range: a > 0. + + + + + Gets the first shape parameter (c) of the distribution. Range: c > 0. + + + + + Gets the second shape parameter (k) of the distribution. Range: k > 0. + + + + + Initializes a new instance of the Burr Type XII class. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Burr distribution. + + + + + Gets the variance of the Burr distribution. + + + + + Gets the standard deviation of the Burr distribution. + + + + + Gets the mode of the Burr distribution. + + + + + Gets the minimum of the Burr distribution. + + + + + Gets the maximum of the Burr distribution. + + + + + Gets the entropy of the Burr distribution (currently not supported). + + + + + Gets the skewness of the Burr distribution. + + + + + Gets the median of the Burr distribution. + + + + + Generates a sample from the Burr distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. 
+ + The array to fill with the samples. + + + + Generates a sequence of samples from the Burr distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale parameter a of the Burr distribution. Range: a > 0. + The first shape parameter c of the Burr distribution. Range: c > 0. + The second shape parameter k of the Burr distribution. Range: k > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Discrete Univariate Categorical distribution. + For details about this distribution, see + Wikipedia - Categorical distribution. This + distribution is sometimes called the Discrete distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. 
+ + + Support: 0..k where k = length(probability mass array)-1 + + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + + + + Initializes a new instance of the Categorical class from a . The distribution + will not be automatically updated when the histogram changes. The categorical distribution will have + one value for each bucket and a probability for that value proportional to the bucket count. + + The histogram from which to create the categorical variable. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Checks whether the parameters of the distribution are valid. + + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Checks whether the parameters of the distribution are valid. + + An array of nonnegative ratios: this array does not need to be normalized as this is often impossible using floating point arithmetic. + If any of the probabilities are negative returns false, or if the sum of parameters is 0.0; otherwise true + + + + Gets the probability mass vector (non-negative ratios) of the multinomial. + + Sometimes the normalized probability vector cannot be represented exactly in a floating point representation. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a . + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets he mode of the distribution. + + Throws a . + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. 
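The Categorical entries above stress that the ratio vector does not have to be normalized; the class normalizes internally (up to floating-point error), and the static overloads documented next mirror the instance members. A short sketch under the same naming assumptions as the earlier examples:

```csharp
// Categorical (a.k.a. Discrete) distribution over 0..k, built from unnormalized ratios.
using System;
using MathNet.Numerics.Distributions;

double[] ratios = { 1.0, 3.0, 6.0 };              // need not sum to 1
var cat = new Categorical(ratios);

Console.WriteLine(cat.Probability(2));            // P(X = 2) = 6/10 = 0.6
Console.WriteLine(cat.CumulativeDistribution(1)); // P(X ≤ 1) = 0.4
Console.WriteLine(cat.Sample());                  // one integer in 0..2

// Static sampling without an instance (assumed helper name).
int k = Categorical.Sample(new Random(1), ratios);
Console.WriteLine(k);
```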
+ + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. + + An array corresponding to a CDF for a categorical distribution. Not assumed to be normalized. + A real number between 0 and 1. + An integer between 0 and the size of the categorical (exclusive), that corresponds to the inverse CDF for the given probability. + + + + Computes the cumulative distribution function. This method performs no parameter checking. + If the probability mass was normalized, the resulting cumulative distribution is normalized as well (up to numerical errors). + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + An array representing the unnormalized cumulative distribution function. + + + + Returns one trials from the categorical distribution. + + The random number generator to use. + The (unnormalized) cumulative distribution of the probability distribution. + One sample from the categorical distribution implied by . + + + + Samples a Binomially distributed random variable. + + The number of successful trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Bernoulli distributed random variables. + + a sequence of successful trial counts. + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. 
+ random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of nonnegative ratios. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of nonnegative ratios. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + The random number generator to use. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Samples one categorical distributed random variable; also known as the Discrete distribution. + + An array of the cumulative distribution. Not assumed to be normalized. + One random integer between 0 and the size of the categorical (exclusive). + + + + Samples a categorically distributed random variable. + + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + An array of the cumulative distribution. Not assumed to be normalized. + random integers between 0 and the size of the categorical (exclusive). + + + + Continuous Univariate Cauchy distribution. + The Cauchy distribution is a symmetric continuous probability distribution. For details about this distribution, see + Wikipedia - Cauchy distribution. + + + + + Initializes a new instance of the class with the location parameter set to 0 and the scale parameter set to 1 + + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Initializes a new instance of the class. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + + + + Gets the location (x0) of the distribution. + + + + + Gets the scale (γ) of the distribution. Range: γ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. 
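The Cauchy block that begins above (location x0, scale γ > 0) continues below with the usual density, CDF and quantile members. A minimal sketch of how those pieces fit together; the mean and variance of a Cauchy distribution are mathematically undefined, so expect the corresponding properties to be of limited use.

```csharp
// Cauchy(location x0, scale γ): a heavy-tailed distribution.
using System;
using MathNet.Numerics.Distributions;

var cauchy = new Cauchy(0.0, 1.0);                            // x0 = 0, γ = 1 (standard Cauchy)
Console.WriteLine(cauchy.Density(0.0));                       // peak = 1/(πγ) ≈ 0.3183
Console.WriteLine(cauchy.CumulativeDistribution(1.0));        // 0.75
Console.WriteLine(cauchy.InverseCumulativeDistribution(0.5)); // median = x0 = 0

// Static sampling helper (assumed name, mirroring the other distributions).
double x = Cauchy.Sample(new Random(7), 0.0, 1.0);
Console.WriteLine(x);
```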
+ + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. 
+ + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (x0) of the distribution. + The scale (γ) of the distribution. Range: γ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi distribution. + This distribution is a continuous probability distribution. The distribution usually arises when a k-dimensional vector's orthogonal + components are independent and each follow a standard normal distribution. The length of the vector will + then have a chi distribution. + Wikipedia - Chi distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Chi distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Chi distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. 
+ the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Chi-Squared distribution. + This distribution is a sum of the squares of k independent standard normal random variables. + Wikipedia - ChiSquare distribution. + + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Initializes a new instance of the class. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + + + + Gets the degrees of freedom (k) of the Chi-Squared distribution. Range: k > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. 
This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ChiSquare distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ChiSquare distribution. + + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a random number from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The degrees of freedom (k) of the distribution. Range: k > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The degrees of freedom (k) of the distribution. Range: k > 0. + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + Generates a sample from the ChiSquare distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sample from the ChiSquare distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The degrees of freedom (k) of the distribution. Range: k > 0. + a sample from the distribution. + + + + Continuous Univariate Uniform distribution. + The continuous uniform distribution is a distribution over real numbers. For details about this distribution, see + Wikipedia - Continuous uniform distribution. + + + + + Initializes a new instance of the ContinuousUniform class with lower bound 0 and upper bound 1. + + + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + If the upper bound is smaller than the lower bound. 
+ + + + Initializes a new instance of the ContinuousUniform class with given lower and upper bounds. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + If the upper bound is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the ContinuousUniform distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + the inverse cumulative density at . 
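The ContinuousUniform members above (instance and static PDF/CDF/InvCDF plus samplers over a [lower, upper] interval) in sketch form, again assuming the conventional member names:

```csharp
// Continuous uniform distribution on [lower, upper].
using System;
using MathNet.Numerics.Distributions;

var u = new ContinuousUniform(0.0, 2.0);
Console.WriteLine(u.Density(1.0));                        // 1/(upper - lower) = 0.5
Console.WriteLine(u.CumulativeDistribution(0.5));         // 0.25
Console.WriteLine(u.InverseCumulativeDistribution(0.9));  // 1.8

// Static forms mirror the instance members (assumed names).
double s = ContinuousUniform.Sample(new Random(3), 0.0, 2.0);
Console.WriteLine(ContinuousUniform.CDF(0.0, 2.0, 1.5));  // 0.75
Console.WriteLine(s);
```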
+ + + + + Generates a sample from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. + + + + Generates a sample from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a uniformly distributed sample. + + + + Generates a sequence of samples from the ContinuousUniform distribution. + + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of uniformly distributed samples. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ upper. + Upper bound. Range: lower ≤ upper. + a sequence of samples from the distribution. + + + + Discrete Univariate Conway-Maxwell-Poisson distribution. + The Conway-Maxwell-Poisson distribution is a generalization of the Poisson, Geometric and Bernoulli + distributions. It is parameterized by two real numbers "lambda" and "nu". For + + nu = 0 the distribution reverts to a Geometric distribution + nu = 1 the distribution reverts to the Poisson distribution + nu -> infinity the distribution converges to a Bernoulli distribution + + This implementation will cache the value of the normalization constant. + Wikipedia - ConwayMaxwellPoisson distribution. + + + + + The mean of the distribution. + + + + + The variance of the distribution. + + + + + Caches the value of the normalization constant. + + + + + Since many properties of the distribution can only be computed approximately, the tolerance + level specifies how much error we accept. + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Gets the lambda (λ) parameter. Range: λ > 0. + + + + + Gets the rate of decay (ν) parameter. Range: ν ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. 
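For the Conway-Maxwell-Poisson entries (parameters λ > 0 and decay rate ν ≥ 0, where ν = 1 recovers the plain Poisson distribution and the PMF/CDF members follow below), a small sketch; note the remarks above, which warn that several moments are only computed approximately and that the normalization constant is cached.

```csharp
// Conway–Maxwell–Poisson(λ, ν): ν = 1 gives Poisson(λ); ν → ∞ approaches Bernoulli;
// ν = 0 with λ < 1 gives a geometric-type distribution.
using System;
using MathNet.Numerics.Distributions;

var cmp = new ConwayMaxwellPoisson(2.5, 1.0);  // λ = 2.5, ν = 1 → same PMF as Poisson(2.5)
Console.WriteLine(cmp.Probability(2));         // P(X = 2)
Console.WriteLine(cmp.Mean);                   // approximated internally (see remarks above)
Console.WriteLine(cmp.Sample());               // one non-negative integer draw
```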
+ + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + the cumulative distribution at location . + + + + + Gets the normalization constant of the Conway-Maxwell-Poisson distribution. + + + + + Computes an approximate normalization constant for the CMP distribution. + + The lambda (λ) parameter for the CMP distribution. + The rate of decay (ν) parameter for the CMP distribution. + + an approximate normalization constant for the CMP distribution. + + + + + Returns one trials from the distribution. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + The z parameter. + + One sample from the distribution implied by , , and . + + + + + Samples a Conway-Maxwell-Poisson distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples a sequence of a Conway-Maxwell-Poisson distributed random variables. + + + a sequence of samples from a Conway-Maxwell-Poisson distribution. + + + + + Samples a random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Samples a sequence of this random variable. + + The lambda (λ) parameter. Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter. 
Range: λ > 0. + The rate of decay (ν) parameter. Range: ν ≥ 0. + + + + Multivariate Dirichlet distribution. For details about this distribution, see + Wikipedia - Dirichlet distribution. + + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + + + + Initializes a new instance of the Dirichlet class. The distribution will + be initialized with the default random number generator. + + An array with the Dirichlet parameters. + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + + + + Initializes a new instance of the class. + random number generator. + The value of each parameter of the Dirichlet distribution. + The dimension of the Dirichlet distribution. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + No parameter can be less than zero and at least one parameter should be larger than zero. + + The parameters of the Dirichlet distribution. + + + + Gets or sets the parameters of the Dirichlet distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the dimension of the Dirichlet distribution. + + + + + Gets the sum of the Dirichlet parameters. + + + + + Gets the mean of the Dirichlet distribution. + + + + + Gets the variance of the Dirichlet distribution. + + + + + Gets the entropy of the distribution. + + + + + Computes the density of the distribution. + + The locations at which to compute the density. + the density at . + The Dirichlet distribution requires that the sum of the components of x equals 1. + You can also leave out the last component, and it will be computed from the others. + + + + Computes the log density of the distribution. + + The locations at which to compute the density. + the density at . + + + + Samples a Dirichlet distributed random vector. + + A sample from this distribution. + + + + Samples a Dirichlet distributed random vector. + + The random number generator to use. + The Dirichlet distribution parameter. + a sample from the distribution. + + + + Discrete Univariate Uniform distribution. + The discrete uniform distribution is a distribution over integers. The distribution + is parameterized by a lower and upper bound (both inclusive). + Wikipedia - Discrete uniform distribution. + + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Initializes a new instance of the DiscreteUniform class. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + + + + Gets the inclusive lower bound of the probability distribution. + + + + + Gets the inclusive upper bound of the probability distribution. 
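For the Conway-Maxwell-Poisson and Dirichlet entries above, a brief sketch may help; it assumes the MathNet.Numerics.Distributions classes of the same names and uses arbitrary parameter values (λ = 2, ν = 1 for CMP, α = (1, 2, 3) for the Dirichlet):

```csharp
// Sketch: CMP with nu = 1 behaves like Poisson(lambda); Dirichlet samples are probability vectors.
using System;
using System.Linq;
using MathNet.Numerics.Distributions;

class CmpAndDirichletDemo
{
    static void Main()
    {
        var cmp = new ConwayMaxwellPoisson(2.0, 1.0);  // lambda = 2, nu = 1
        Console.WriteLine(cmp.Probability(3));          // P(X = 3), here ~ Poisson(2) pmf at 3
        Console.WriteLine(cmp.Mean);                    // approximated internally, per the docs

        var dir = new Dirichlet(new[] { 1.0, 2.0, 3.0 });
        double[] p = dir.Sample();
        Console.WriteLine(p.Sum());                     // ~ 1.0 by construction
        Console.WriteLine(dir.Density(p));              // density of that vector
    }
}
```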
+ + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution; since every element in the domain has the same probability this method returns the middle one. + + + + + Gets the median of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + the cumulative distribution at location . + + + + + Generates one sample from the discrete uniform distribution. This method does not do any parameter checking. + + The random source to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A random sample from the discrete uniform distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of uniformly distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a uniformly distributed random variable. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + The random number generator to use. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. 
+ Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Samples a uniformly distributed random variable. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + A sample from the discrete uniform distribution. + + + + Samples a sequence of uniformly distributed random variables. + + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound, inclusive. Range: lower ≤ upper. + Upper bound, inclusive. Range: lower ≤ upper. + a sequence of samples from the discrete uniform distribution. + + + + Continuous Univariate Erlang distribution. + This distribution is a continuous probability distribution with wide applicability primarily due to its + relation to the exponential and Gamma distributions. + Wikipedia - Erlang distribution. + + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Erlang distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The scale (μ) of the Erlang distribution. Range: μ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Erlang distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + Gets the shape (k) of the Erlang distribution. Range: k ≥ 0. + + + + + Gets the rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + + + + + Gets the scale of the Erlang distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum value. + + + + + Gets the Maximum value. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). 
+ + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Generates a sample from the Erlang distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Erlang distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Erlang distribution. Range: k ≥ 0. + The rate or inverse scale (λ) of the Erlang distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Exponential distribution. + The exponential distribution is a distribution over the real numbers parameterized by one non-negative parameter. + Wikipedia - exponential distribution. + + + + + Initializes a new instance of the class. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Initializes a new instance of the class. 
+ + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + Gets the rate (λ) parameter of the distribution. Range: λ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Exponential distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. 
+ The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The random number generator to use. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Draws a random sample from the distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sequence of samples from the Exponential distribution. + + The rate (λ) parameter of the distribution. Range: λ ≥ 0. + a sequence of samples from the distribution. + + + + Continuous Univariate F-distribution, also known as Fisher-Snedecor distribution. + For details about this distribution, see + Wikipedia - FisherSnedecor distribution. + + + + + Initializes a new instance of the class. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Initializes a new instance of the class. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + Gets the first degree of freedom (d1) of the distribution. Range: d1 > 0. + + + + + Gets the second degree of freedom (d2) of the distribution. Range: d2 > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. 
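The DiscreteUniform, Erlang and Exponential entries in the preceding block can be exercised together; the sketch below (assumed MathNet.Numerics API, arbitrary parameters) also illustrates the documented relationship that an Erlang with shape k = 1 coincides with an Exponential of the same rate:

```csharp
// Sketch: discrete uniform die roll, and Erlang(1, rate) vs Exponential(rate).
using System;
using MathNet.Numerics.Distributions;

class DiscreteAndExponentialDemo
{
    static void Main()
    {
        var die = new DiscreteUniform(1, 6);                  // inclusive bounds: a fair die
        Console.WriteLine(die.Probability(3));                // 1/6
        Console.WriteLine(DiscreteUniform.Sample(new Random(1), 1, 6));

        var erlang = new Erlang(1, 0.5);                      // shape k = 1, rate = 0.5
        var exp = new Exponential(0.5);
        Console.WriteLine(erlang.Density(2.0));               // 0.5 * e^-1 ~ 0.184
        Console.WriteLine(exp.Density(2.0));                  // same value
        Console.WriteLine(exp.InverseCumulativeDistribution(0.5)); // ln(2)/0.5 ~ 1.386
    }
}
```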
+ + + + Generates a sample from the FisherSnedecor distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the FisherSnedecor distribution. + + a sequence of samples from the distribution. + + + + Generates one sample from the FisherSnedecor distribution without parameter checking. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a FisherSnedecor distributed random number. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. 
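A short sketch for the F-distribution (FisherSnedecor) members above, assuming the MathNet.Numerics.Distributions API; note the documented warning that InvCDF is not an explicit implementation and is therefore slow:

```csharp
// Sketch: CDF/InvCDF round trip for an F(5, 10) distribution.
using System;
using MathNet.Numerics.Distributions;

class FisherDemo
{
    static void Main()
    {
        var f = new FisherSnedecor(5.0, 10.0);       // d1 = 5, d2 = 10

        double x = 2.5;
        double p = f.CumulativeDistribution(x);      // P(X <= 2.5)
        Console.WriteLine(p);

        // InvCDF is documented as slow/non-explicit: fine for a one-off
        // critical value, better avoided in tight loops.
        Console.WriteLine(f.InverseCumulativeDistribution(p)); // ~ 2.5 again

        Console.WriteLine(FisherSnedecor.CDF(5.0, 10.0, x));   // static equivalent
    }
}
```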
+ + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The first degree of freedom (d1) of the distribution. Range: d1 > 0. + The second degree of freedom (d2) of the distribution. Range: d2 > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Gamma distribution. + For details about this distribution, see + Wikipedia - Gamma distribution. + + + The Gamma distribution is parametrized by a shape and inverse scale parameter. When we want + to specify a Gamma distribution which is a point distribution we set the shape parameter to be the + location of the point distribution and the inverse scale as positive infinity. The distribution + with shape and inverse scale both zero is undefined. + + Random number generation for the Gamma distribution is based on the algorithm in: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Initializes a new instance of the Gamma class. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a Gamma distribution from a shape and scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k) of the Gamma distribution. Range: k ≥ 0. + The scale (θ) of the Gamma distribution. Range: θ ≥ 0 + The random number generator which is used to draw random samples. Optional, can be null. + + + + Constructs a Gamma distribution from a shape and inverse scale parameter. The distribution will + be initialized with the default random number generator. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + Gets or sets the shape (k, α) of the Gamma distribution. Range: α ≥ 0. + + + + + Gets or sets the rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + + + + + Gets or sets the scale (θ) of the Gamma distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Gamma distribution. + + + + + Gets the variance of the Gamma distribution. + + + + + Gets the standard deviation of the Gamma distribution. + + + + + Gets the entropy of the Gamma distribution. + + + + + Gets the skewness of the Gamma distribution. + + + + + Gets the mode of the Gamma distribution. + + + + + Gets the median of the Gamma distribution. + + + + + Gets the minimum of the Gamma distribution. + + + + + Gets the maximum of the Gamma distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . 
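The Gamma entries above expose both a shape/rate constructor and a WithShapeScale factory. A sketch of the two parameterizations, assuming the MathNet.Numerics API (scale θ = 1/β, so Gamma(2, rate 0.5) and WithShapeScale(2, 2) describe the same distribution):

```csharp
// Sketch: same Gamma distribution built via rate and via scale.
using System;
using MathNet.Numerics.Distributions;

class GammaDemo
{
    static void Main()
    {
        var byRate  = new Gamma(2.0, 0.5);             // shape alpha = 2, rate beta = 0.5
        var byScale = Gamma.WithShapeScale(2.0, 2.0);  // shape 2, scale theta = 2

        Console.WriteLine(byRate.Mean);                // alpha/beta = 4
        Console.WriteLine(byScale.Mean);               // alpha*theta = 4, identical
        Console.WriteLine(byRate.Density(3.0));
        Console.WriteLine(Gamma.PDF(2.0, 0.5, 3.0));   // static form, same value
    }
}
```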
+ + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Gamma distribution. + + a sequence of samples from the distribution. + + + + Sampling implementation based on: + "A Simple Method for Generating Gamma Variables" - Marsaglia & Tsang + ACM Transactions on Mathematical Software, Vol. 26, No. 3, September 2000, Pages 363–372. + This method performs no parameter checks. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + A sample from a Gamma distributed random variable. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + the inverse cumulative density at . + + + + + Generates a sample from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The random number generator to use. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. 
+ The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Gamma distribution. + + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k, α) of the Gamma distribution. Range: α ≥ 0. + The rate or inverse scale (β) of the Gamma distribution. Range: β ≥ 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Geometric distribution. + The Geometric distribution is a distribution over positive integers parameterized by one positive real number. + This implementation of the Geometric distribution will never generate 0's. + Wikipedia - geometric distribution. + + + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Initializes a new instance of the Geometric class. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + A that represents this instance. + + + + Tests whether the provided values are valid parameters for this distribution. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Gets the probability of generating a one. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + Throws a not supported exception. + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. 
+ The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Returns one sample from the distribution. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + One sample from the distribution implied by . + + + + Samples a Geometric distributed random variable. + + A sample from the Geometric distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Geometric distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The probability (p) of generating one. Range: 0 ≤ p ≤ 1. + + + + Discrete Univariate Hypergeometric distribution. + This distribution is a discrete probability distribution that describes the number of successes in a sequence + of n draws from a finite population without replacement, just as the binomial distribution + describes the number of successes for draws with replacement + Wikipedia - Hypergeometric distribution. + + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Initializes a new instance of the Hypergeometric class. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the size of the population (N). + + + + + Gets the number of draws without replacement (n). + + + + + Gets the number successes within the population (K, M). + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. 
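For the Geometric entries above, the documented convention matters: this implementation never generates 0, i.e. it counts the trial on which the first success occurs. A sketch under that convention (assumed MathNet.Numerics API, p = 0.25 arbitrary):

```csharp
// Sketch: Geometric distribution with support {1, 2, 3, ...}.
using System;
using MathNet.Numerics.Distributions;

class GeometricDemo
{
    static void Main()
    {
        var geo = new Geometric(0.25);                    // p = 0.25

        Console.WriteLine(geo.Probability(1));            // P(X = 1) = p = 0.25
        Console.WriteLine(geo.Probability(3));            // (1-p)^2 * p ~ 0.1406
        Console.WriteLine(geo.Mean);                      // 1/p = 4 under this convention
        Console.WriteLine(geo.CumulativeDistribution(3)); // 1 - (1-p)^3 ~ 0.578

        Console.WriteLine(geo.Sample());                  // always >= 1
    }
}
```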
+ + + + + Gets the maximum of the distribution. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + the cumulative distribution at location . + + + + + Generates a sample from the Hypergeometric distribution without doing parameter checking. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The n parameter of the distribution. + a random number from the Hypergeometric distribution. + + + + Samples a Hypergeometric distributed random variable. + + The number of successes in n trials. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Hypergeometric distributed random variables. + + a sequence of successes in n trials. + + + + Samples a random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The random number generator to use. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Samples a sequence of this random variable. + + The size of the population (N). + The number successes within the population (K, M). + The number of draws without replacement (n). + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The size of the population (N). + The number successes within the population (K, M). 
+ The number of draws without replacement (n). + + + + Continuous Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by a double. + + + + + Gets the largest element in the domain of the distribution which can be represented by a double. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Discrete Univariate Probability Distribution. + + + + + + Gets the mode of the distribution. + + + + + Gets the smallest element in the domain of the distribution which can be represented by an integer. + + + + + Gets the largest element in the domain of the distribution which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Draws a sequence of random samples from the distribution. + + an infinite sequence of samples from the distribution. + + + + Probability Distribution. + + + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Continuous Univariate Inverse Gamma distribution. + The inverse Gamma distribution is a distribution over the positive real numbers parameterized by + two positive parameters. + Wikipedia - InverseGamma distribution. + + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Initializes a new instance of the class. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + + + + Gets or sets the shape (α) parameter. Range: α > 0. + + + + + Gets or sets The scale (β) parameter. Range: β > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. 
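The Hypergeometric entries and the discrete-distribution interface described above can be combined as in the following sketch (assumed MathNet.Numerics types; N = 50, K = 5, n = 10 are arbitrary):

```csharp
// Sketch: hypergeometric draws without replacement, accessed through the common interface.
using System;
using MathNet.Numerics.Distributions;

class HypergeometricDemo
{
    static void Main()
    {
        // population N = 50, successes K = 5, draws n = 10
        var hyper = new Hypergeometric(50, 5, 10);

        Console.WriteLine(hyper.Probability(1));            // P(exactly 1 success)
        Console.WriteLine(hyper.CumulativeDistribution(1)); // P(at most 1 success)
        Console.WriteLine(hyper.Mean);                      // n*K/N = 1.0

        // any discrete distribution can be handled through the interface
        IDiscreteDistribution d = hyper;
        Console.WriteLine(d.Sample());
        Console.WriteLine($"{d.Minimum}..{d.Maximum}");     // integer support bounds
    }
}
```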
+ + + + + Gets the median of the distribution. + + Throws . + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Cauchy distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (α) of the distribution. Range: α > 0. + The scale (β) of the distribution. Range: β > 0. + a sequence of samples from the distribution. + + + + Gets the mean (μ) of the distribution. Range: μ > 0. + + + + + Gets the shape (λ) of the distribution. Range: λ > 0. + + + + + Initializes a new instance of the InverseGaussian class. + + The mean (μ) of the distribution. Range: μ > 0. 
+ The shape (λ) of the distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Inverse Gaussian distribution. + + + + + Gets the variance of the Inverse Gaussian distribution. + + + + + Gets the standard deviation of the Inverse Gaussian distribution. + + + + + Gets the median of the Inverse Gaussian distribution. + No closed form analytical expression exists, so this value is approximated numerically and can throw an exception. + + + + + Gets the minimum of the Inverse Gaussian distribution. + + + + + Gets the maximum of the Inverse Gaussian distribution. + + + + + Gets the skewness of the Inverse Gaussian distribution. + + + + + Gets the kurtosis of the Inverse Gaussian distribution. + + + + + Gets the mode of the Inverse Gaussian distribution. + + + + + Gets the entropy of the Inverse Gaussian distribution (currently not supported). + + + + + Generates a sample from the inverse Gaussian distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the inverse Gaussian distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the inverse Gaussian distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + + + + Generates a sequence of samples from the Burr distribution. + + The random number generator to use. + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). 
+ + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The mean (μ) of the distribution. Range: μ > 0. + The shape (λ) of the distribution. Range: λ > 0. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Estimates the Inverse Gaussian parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + An Inverse Gaussian distribution. + + + + Multivariate Inverse Wishart distribution. This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The inverse Wishart distribution + is the conjugate prior for the covariance matrix of a multivariate normal distribution. + Wikipedia - Inverse-Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Initializes a new instance of the class. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + + + + Gets or sets the degree of freedom (ν) for the inverse Wishart distribution. + + + + + Gets or sets the scale matrix (Ψ) for the inverse Wishart distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. 2B (2 ed.). Arnold. ISBN 0-340-80752-0. + + + + Gets the variance of the distribution. + + The variance of the distribution. + Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. + + + + Evaluates the probability density function for the inverse Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + a sample from the distribution. + + + + Samples an inverse Wishart distributed random variable by sampling + a Wishart random variable and inverting the matrix. + + The random number generator to use. 
+ The degree of freedom (ν) for the inverse Wishart distribution. + The scale matrix (Ψ) for the inverse Wishart distribution. + a sample from the distribution. + + + + Univariate Probability Distribution. + + + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the median of the distribution. + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Continuous Univariate Laplace distribution. + The Laplace distribution is a distribution over the real numbers parameterized by a mean and + scale parameter. The PDF is: + p(x) = \frac{1}{2 * scale} \exp{- |x - mean| / scale}. + Wikipedia - Laplace distribution. + + + + + Initializes a new instance of the class (location = 0, scale = 1). + + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + If is negative. + + + + Initializes a new instance of the class. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + + + + Gets the location (μ) of the Laplace distribution. + + + + + Gets the scale (b) of the Laplace distribution. Range: b > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Samples a Laplace distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sample from the Laplace distribution. + + a sample from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. 
ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (b) of the distribution. Range: b > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Log-Normal distribution. + For details about this distribution, see + Wikipedia - Log-Normal distribution. + + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the logarithm of the distribution. + The shape (σ) of the logarithm of the distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the class. + The distribution will be initialized with the default + random number generator. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a log-normal distribution with the desired mu and sigma parameters. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Constructs a log-normal distribution with the desired mean and variance. + + The mean of the log-normal distribution. + The variance of the log-normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + + + + Estimates the log-normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A log-normal distribution. + MATLAB: lognfit + + + + A string representation of the distribution. + + a string representation of the distribution. 
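As a quick illustration of the log-normal entries above, the sketch below constructs the distribution three ways: directly from (μ, σ), from a target mean/variance, and by maximum-likelihood estimation from sample data. It assumes a .NET distributions library matching these docs (a `MathNet.Numerics.Distributions`-style namespace); the member names `WithMeanVariance`, `Estimate`, `Mu` and `Sigma` are assumptions inferred from the entries, not confirmed by them.

```csharp
using System;
using System.Linq;
using MathNet.Numerics.Distributions;   // assumed namespace for these docs

class LogNormalSketch
{
    static void Main()
    {
        // Parameterized directly by the log-scale mu and shape sigma...
        var direct = new LogNormal(0.5, 0.25);

        // ...or constructed from the desired mean and variance of the samples
        // ("Constructs a log-normal distribution with the desired mean and variance").
        var byMoments = LogNormal.WithMeanVariance(2.0, 0.5);   // assumed factory name

        // Maximum-likelihood fit from sample data (cf. the MATLAB lognfit remark above).
        double[] data = byMoments.Samples().Take(10000).ToArray();
        var fitted = LogNormal.Estimate(data);                  // assumed method name

        Console.WriteLine(direct.Mean);
        Console.WriteLine($"fitted mu = {fitted.Mu}, sigma = {fitted.Sigma}");  // assumed property names
    }
}
```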
+ + + + Tests whether the provided values are valid parameters for this distribution. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + + + + Gets the log-scale (μ) (mean of the logarithm) of the distribution. + + + + + Gets the shape (σ) (standard deviation of the logarithm) of the distribution. Range: σ ≥ 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mu of the log-normal distribution. + + + + + Gets the variance of the log-normal distribution. + + + + + Gets the standard deviation of the log-normal distribution. + + + + + Gets the entropy of the log-normal distribution. + + + + + Gets the skewness of the log-normal distribution. + + + + + Gets the mode of the log-normal distribution. + + + + + Gets the median of the log-normal distribution. + + + + + Gets the minimum of the log-normal distribution. + + + + + Gets the maximum of the log-normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the density at . + + MATLAB: lognpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: logncdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + the inverse cumulative density at . 
+ + MATLAB: logninv + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the log-normal distribution using the Box-Muller algorithm. + + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The log-scale (μ) of the distribution. + The shape (σ) of the distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Multivariate Matrix-valued Normal distributions. The distribution + is parameterized by a mean matrix (M), a covariance matrix for the rows (V) and a covariance matrix + for the columns (K). If the dimension of M is d-by-m then V is d-by-d and K is m-by-m. + Wikipedia - MatrixNormal distribution. + + + + + The mean of the matrix normal distribution. + + + + + The covariance matrix for the rows. + + + + + The covariance matrix for the columns. + + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + + + + Initializes a new instance of the class. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + The random number generator which is used to draw random samples. + If the dimensions of the mean and two covariance matrices don't match. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + + + + Gets the mean. (M) + + The mean of the distribution. + + + + Gets the row covariance. (V) + + The row covariance. + + + + Gets the column covariance. (K) + + The column covariance. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Evaluates the probability density function for the matrix normal distribution. + + The matrix at which to evaluate the density at. + the density at + If the argument does not have the correct dimensions. + + + + Samples a matrix normal distributed random variable. + + A random number from this distribution. + + + + Samples a matrix normal distributed random variable. + + The random number generator to use. 
+ The mean of the matrix normal. + The covariance matrix for the rows. + The covariance matrix for the columns. + If the dimensions of the mean and two covariance matrices don't match. + a sequence of samples from the distribution. + + + + Samples a vector normal distributed random variable. + + The random number generator to use. + The mean of the vector normal distribution. + The covariance matrix of the vector normal distribution. + a sequence of samples from defined distribution. + + + + Multivariate Multinomial distribution. For details about this distribution, see + Wikipedia - Multinomial distribution. + + + The distribution is parameterized by a vector of ratios: in other words, the parameter + does not have to be normalized and sum to 1. The reason is that some vectors can't be exactly normalized + to sum to 1 in floating point representation. + + + + + Stores the normalized multinomial probabilities. + + + + + The number of trials. + + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + Initializes a new instance of the Multinomial class. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + The random number generator which is used to draw random samples. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + Initializes a new instance of the Multinomial class from histogram . The distribution will + not be automatically updated when the histogram changes. + + Histogram instance + The number of trials. + If any of the probabilities are negative or do not sum to one. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + If any of the probabilities are negative returns false, + if the sum of parameters is 0.0, or if the number of trials is negative; otherwise true. + + + + Gets the proportion of ratios. + + + + + Gets the number of trials. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Computes values of the probability mass function. + + Non-negative integers x1, ..., xk + The probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Computes values of the log probability mass function. + + Non-negative integers x1, ..., xk + The log probability mass at location . + When is null. + When length of is not equal to event probabilities count. + + + + Samples one multinomial distributed random variable. + + the counts for each of the different possible values. + + + + Samples a sequence multinomially distributed random variables. + + a sequence of counts for each of the different possible values. + + + + Samples one multinomial distributed random variable. + + The random number generator to use. 
+ An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of trials. + the counts for each of the different possible values. + + + + Samples a multinomially distributed random variable. + + The random number generator to use. + An array of nonnegative ratios: this array does not need to be normalized + as this is often impossible using floating point arithmetic. + The number of variables needed. + a sequence of counts for each of the different possible values. + + + + Discrete Univariate Negative Binomial distribution. + The negative binomial is a distribution over the natural numbers with two parameters r, p. For the special + case that r is an integer one can interpret the distribution as the number of failures before the r'th success + when the probability of success is p. + Wikipedia - NegativeBinomial distribution. + + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Initializes a new instance of the class. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + The random number generator which is used to draw random samples. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Gets the number of successes. Range: r ≥ 0. + + + + + Gets the probability of success. Range: 0 ≤ p ≤ 1. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). 
+ + The location in the domain where we want to evaluate the log probability mass function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + the cumulative distribution at location . + + + + + Samples a negative binomial distributed random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + a sample from the distribution. + + + + Samples a NegativeBinomial distributed random variable. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of NegativeBinomial distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Samples a sequence of this random variable. + + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The number of successes (r) required to stop the experiment. Range: r ≥ 0. + The probability (p) of a trial resulting in success. Range: 0 ≤ p ≤ 1. + + + + Continuous Univariate Normal distribution, also known as Gaussian distribution. + For details about this distribution, see + Wikipedia - Normal distribution. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + + + + Initializes a new instance of the Normal class. This is a normal distribution with mean 0.0 + and standard deviation 1.0. The distribution will + be initialized with the default random number generator. + + The random number generator which is used to draw random samples. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. 
+ The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Initializes a new instance of the Normal class with a particular mean and standard deviation. The distribution will + be initialized with the default random number generator. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. + + + + Constructs a normal distribution from a mean and standard deviation. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The random number generator which is used to draw random samples. Optional, can be null. + a normal distribution. + + + + Constructs a normal distribution from a mean and variance. + + The mean (μ) of the normal distribution. + The variance (σ^2) of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + + + + Constructs a normal distribution from a mean and precision. + + The mean (μ) of the normal distribution. + The precision of the normal distribution. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + + + + Estimates the normal distribution parameters from sample data with maximum-likelihood. + + The samples to estimate the distribution parameters from. + The random number generator which is used to draw random samples. Optional, can be null. + A normal distribution. + MATLAB: normfit + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + Gets the mean (μ) of the normal distribution. + + + + + Gets the standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + + + + + Gets the variance of the normal distribution. + + + + + Gets the precision of the normal distribution. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the entropy of the normal distribution. + + + + + Gets the skewness of the normal distribution. + + + + + Gets the mode of the normal distribution. + + + + + Gets the median of the normal distribution. + + + + + Gets the minimum of the normal distribution. + + + + + Gets the maximum of the normal distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + a sample from the distribution. 
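To make the normal-distribution entries above concrete, here is a minimal instance-level sketch. It assumes a `MathNet.Numerics.Distributions`-style API; the constructor and the `Density`, `CumulativeDistribution`, `InverseCumulativeDistribution`, `Sample` and `Samples` members correspond to the entries documented above, and the parameter values are purely illustrative.

```csharp
using System;
using MathNet.Numerics.Distributions;   // assumed namespace for these docs

class NormalSketch
{
    static void Main()
    {
        // Normal(mean, standard deviation); sigma must be >= 0 as stated above.
        var normal = new Normal(1500.0, 250.0);

        Console.WriteLine(normal.Mean);                                  // 1500
        Console.WriteLine(normal.StdDev);                                // 250
        Console.WriteLine(normal.Density(1500.0));                       // PDF at x
        Console.WriteLine(normal.CumulativeDistribution(1750.0));        // P(X <= 1750)
        Console.WriteLine(normal.InverseCumulativeDistribution(0.975));  // quantile

        // One Box-Muller draw, then fill a whole array in place.
        double one = normal.Sample();
        var batch = new double[16];
        normal.Samples(batch);
        Console.WriteLine(one);
    }
}
```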
+ + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the density at . + + MATLAB: normpdf + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the cumulative distribution at location . + + MATLAB: normcdf + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + the inverse cumulative density at . + + MATLAB: norminv + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The random number generator to use. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Generates a sample from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sample from the distribution. + + + + Generates a sequence of samples from the normal distribution using the Box-Muller algorithm. + + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The mean (μ) of the normal distribution. + The standard deviation (σ) of the normal distribution. Range: σ ≥ 0. + a sequence of samples from the distribution. + + + + This structure represents the type over which the distribution + is defined. + + + + + Initializes a new instance of the struct. + + The mean of the pair. + The precision of the pair. + + + + Gets or sets the mean of the pair. + + + + + Gets or sets the precision of the pair. 
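The static sampling overloads and the mean/precision pair documented above suggest the following usage pattern. The `WithMeanPrecision` factory name is an assumption based on the "Constructs a normal distribution from a mean and precision" entry; the static `Sample`/`Samples` signatures follow the entries directly above.

```csharp
using System;
using MathNet.Numerics.Distributions;   // assumed namespace for these docs

class NormalStaticSketch
{
    static void Main()
    {
        var rng = new Random(42);

        // Static helpers avoid allocating a distribution object.
        double one = Normal.Sample(rng, 0.0, 1.0);     // single draw, mean 0, stddev 1
        var buffer = new double[1000];
        Normal.Samples(rng, buffer, 0.0, 1.0);         // fill an array in place

        // Precision (1/sigma^2) parameterization, matching the mean/precision pair above.
        var byPrecision = Normal.WithMeanPrecision(0.0, 4.0);   // assumed factory name; stddev 0.5
        Console.WriteLine(byPrecision.StdDev);
        Console.WriteLine(one);
    }
}
```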
+ + + + + Multivariate Normal-Gamma Distribution. + The distribution is the conjugate prior distribution for the + distribution. It specifies a prior over the mean and precision of the distribution. + It is parameterized by four numbers: the mean location, the mean scale, the precision shape and the + precision inverse scale. + The distribution NG(mu, tau | mloc,mscale,psscale,pinvscale) = Normal(mu | mloc, 1/(mscale*tau)) * Gamma(tau | psscale,pinvscale). + The following degenerate cases are special: when the precision is known, + the precision shape will encode the value of the precision while the precision inverse scale is positive + infinity. When the mean is known, the mean location will encode the value of the mean while the scale + will be positive infinity. A completely degenerate NormalGamma distribution with known mean and precision is possible as well. + Wikipedia - Normal-Gamma distribution. + + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Initializes a new instance of the class. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + + + + Gets the location of the mean. + + + + + Gets the scale of the mean. + + + + + Gets the shape of the precision. + + + + + Gets the inverse scale of the precision. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Returns the marginal distribution for the mean of the NormalGamma distribution. + + the marginal distribution for the mean of the NormalGamma distribution. + + + + Returns the marginal distribution for the precision of the distribution. + + The marginal distribution for the precision of the distribution/ + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the variance of the distribution. + + The mean of the distribution. + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + Density value + + + + Evaluates the probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + Density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean/precision pair of the distribution + The log of the density value + + + + Evaluates the log probability density function for a NormalGamma distribution. + + The mean of the distribution + The precision of the distribution + The log of the density value + + + + Generates a sample from the NormalGamma distribution. + + a sample from the distribution. + + + + Generates a sequence of samples from the NormalGamma distribution + + a sequence of samples from the distribution. + + + + Generates a sample from the NormalGamma distribution. + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sample from the distribution. 
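A hedged sketch of the NormalGamma prior described above: constructed from (mean location, mean scale, precision shape, precision inverse scale), sampled as a mean/precision pair, and evaluated via `Density(mean, precision)`. The sample's `Mean`/`Precision` members follow the mean/precision-pair entries above; the exact signatures are inferred from this documentation, not confirmed by it.

```csharp
using System;
using MathNet.Numerics.Distributions;   // assumed namespace for these docs

class NormalGammaSketch
{
    static void Main()
    {
        // NG(mu, tau | meanLocation, meanScale, precisionShape, precisionInverseScale)
        var prior = new NormalGamma(0.0, 1.0, 2.0, 2.0);

        // A draw from the prior is a mean/precision pair (see the struct documented above).
        var mp = prior.Sample();
        Console.WriteLine($"mean = {mp.Mean}, precision = {mp.Precision}");

        // Joint density evaluated at a candidate mean and precision.
        Console.WriteLine(prior.Density(0.1, 1.5));
    }
}
```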
+ + + + Generates a sequence of samples from the NormalGamma distribution + + The random number generator to use. + The location of the mean. + The scale of the mean. + The shape of the precision. + The inverse scale of the precision. + a sequence of samples from the distribution. + + + + Continuous Univariate Pareto distribution. + The Pareto distribution is a power law probability distribution that coincides with social, + scientific, geophysical, actuarial, and many other types of observable phenomena. + For details about this distribution, see + Wikipedia - Pareto distribution. + + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + If or are negative. + + + + Initializes a new instance of the class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The random number generator which is used to draw random samples. + If or are negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Pareto distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. 
+ The shape (α) of the distribution. Range: α > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + a sequence of samples from the distribution. + + + + Discrete Univariate Poisson distribution. + + + Distribution is described at Wikipedia - Poisson distribution. + Knuth's method is used to generate Poisson distributed random variables. + f(x) = exp(-λ)*λ^x/x!; + + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + If is equal or less then 0.0. + + + + Initializes a new instance of the class. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + If is equal or less then 0.0. + + + + Returns a that represents this instance. + + + A that represents this instance. + + + + + Tests whether the provided values are valid parameters for this distribution. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + + + + Gets the Poisson distribution parameter λ. Range: λ > 0. + + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. 
+ + Approximation, see Wikipedia Poisson distribution + + + + Gets the skewness of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + Approximation, see Wikipedia Poisson distribution + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Generates one sample from the Poisson distribution. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by Knuth's method. + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + + + + Generates one sample from the Poisson distribution by "Rejection method PA". + + The random source to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A random sample from the Poisson distribution. + "Rejection method PA" from "The Computer Generation of Poisson Random Variables" by A. C. Atkinson, + Journal of the Royal Statistical Society Series C (Applied Statistics) Vol. 28, No. 1. (1979) + The article is on pages 29-35. The algorithm given here is on page 32. + + + + Samples a Poisson distributed random variable. + + A sample from the Poisson distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of Poisson distributed random variables. + + a sequence of successes in N trials. + + + + Samples a Poisson distributed random variable. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The random number generator to use. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. 
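The Poisson entries above (PMF, CDF, Knuth/rejection sampling) translate into a short sketch like the following. The static `PMF` helper name is an assumption; the remaining members mirror the documented instance methods and the static `Sample(rnd, lambda)` entry.

```csharp
using System;
using MathNet.Numerics.Distributions;   // assumed namespace for these docs

class PoissonSketch
{
    static void Main()
    {
        var poisson = new Poisson(3.5);                        // lambda > 0

        Console.WriteLine(poisson.Probability(2));             // P(X = 2)
        Console.WriteLine(poisson.CumulativeDistribution(2));  // P(X <= 2)
        Console.WriteLine(Poisson.PMF(3.5, 2));                // assumed static helper

        // Single draw; internally Knuth's method or "Rejection method PA" per the notes above.
        var rng = new Random(1);
        Console.WriteLine(Poisson.Sample(rng, 3.5));
    }
}
```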
+ + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Samples a Poisson distributed random variable. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + A sample from the Poisson distribution. + + + + Samples a sequence of Poisson distributed random variables. + + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The lambda (λ) parameter of the Poisson distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Rayleigh distribution. + The Rayleigh distribution (pronounced /ˈreɪli/) is a continuous probability distribution. As an + example of how it arises, the wind speed will have a Rayleigh distribution if the components of + the two-dimensional wind velocity vector are uncorrelated and normally distributed with equal variance. + For details about this distribution, see + Wikipedia - Rayleigh distribution. + + + + + Initializes a new instance of the class. + + The scale (σ) of the distribution. Range: σ > 0. + If is negative. + + + + Initializes a new instance of the class. + + The scale (σ) of the distribution. Range: σ > 0. + The random number generator which is used to draw random samples. + If is negative. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (σ) of the distribution. Range: σ > 0. + + + + Gets the scale (σ) of the distribution. Range: σ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Rayleigh distribution. 
+ + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (σ) of the distribution. Range: σ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The scale (σ) of the distribution. Range: σ > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The scale (σ) of the distribution. Range: σ > 0. + the inverse cumulative density at . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The scale (σ) of the distribution. Range: σ > 0. + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized Error Distribution (SGED). + Implements the univariate SSkewed Generalized Error Distribution. For details about this + distribution, see + + Wikipedia - Generalized Error Distribution. + It includes Laplace, Normal and Student-t distributions. + This is the distribution with q=Inf. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedError class. This is a generalized error distribution + with location=0.0, scale=1.0, skew=0.0 and p=2.0 (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. 
Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Generates a sample from the Skew Generalized Error distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized Error distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized Error distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized Error distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + Parameter that controls kurtosis. Range: p > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Skewed Generalized T-distribution. + Implements the univariate Skewed Generalized t-distribution. For details about this + distribution, see + + Wikipedia - Skewed generalized t-distribution. + The skewed generalized t-distribution contains many different distributions within it + as special cases based on the parameterization chosen. + + This implementation is based on the R package dsgt and corresponding viginette, see + https://cran.r-project.org/web/packages/sgt/vignettes/sgt.pdf. 
Compared to that + implementation, the options for mean adjustment and variance adjustment are always true. + The location (μ) is the mean of the distribution. + The scale (σ) squared is the variance of the distribution. + + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. + + + + Initializes a new instance of the SkewedGeneralizedT class. This is a skewed generalized t-distribution + with location=0.0, scale=1.0, skew=0.0, p=2.0 and q=Inf (a standard normal distribution). + + + + + Initializes a new instance of the SkewedGeneralizedT class with a particular location, scale, skew + and kurtosis parameters. Different parameterizations result in different distributions. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Given a parameter set, returns the distribution that matches this parameterization. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + Null if no known distribution matches the parameterization, else the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + + + + Gets the location (μ) of the Skewed Generalized t-distribution. + + + + + Gets the scale (σ) of the Skewed Generalized t-distribution. Range: σ > 0. + + + + + Gets the skew (λ) of the Skewed Generalized t-distribution. Range: 1 > λ > -1. + + + + + Gets the first parameter that controls the kurtosis of the distribution. Range: p > 0. + + + + + Gets the second parameter that controls the kurtosis of the distribution. Range: q > 0. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + The location at which to compute the density. + the density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. 
Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + the inverse cumulative density at . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Skew Generalized t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Generates a sample from the Skew Generalized t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sample from the distribution. + + + + Generates a sequence of samples from the Skew Generalized t-distribution using inverse transform. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Fills an array with samples from the Skew Generalized t-distribution using inverse transform. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The skew, 1 > λ > -1 + First parameter that controls kurtosis. Range: p > 0 + Second parameter that controls kurtosis. Range: q > 0 + a sequence of samples from the distribution. + + + + Continuous Univariate Stable distribution. 
+ A random variable is said to be stable (or to have a stable distribution) if it has + the property that a linear combination of two independent copies of the variable has + the same distribution, up to location and scale parameters. + For details about this distribution, see + Wikipedia - Stable distribution. + + + + + Initializes a new instance of the class. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + + + + Initializes a new instance of the class. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + + + + Gets the stability (α) of the distribution. Range: 2 ≥ α > 0. + + + + + Gets The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + + + + + Gets the scale (c) of the distribution. Range: c > 0. + + + + + Gets the location (μ) of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets he entropy of the distribution. + + Always throws a not supported exception. + + + + Gets the skewness of the distribution. + + Throws a not supported exception of Alpha != 2. + + + + Gets the mode of the distribution. + + Throws a not supported exception if Beta != 0. + + + + Gets the median of the distribution. + + Throws a not supported exception if Beta != 0. + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + Throws a not supported exception if Alpha != 2, (Alpha != 1 and Beta !=0), or (Alpha != 0.5 and Beta != 1) + + + + Samples the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a random number from the distribution. + + + + Draws a random sample from the distribution. + + A random number from this distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Stable distribution. + + a sequence of samples from the distribution. 
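Before the static helpers that follow, a minimal usage sketch for the stable distribution documented above; the skewed generalized t/error classes described earlier follow the same construct/PDF/CDF/Sample pattern. The constructor argument order (stability alpha, skewness beta, scale, location) is taken from the parameter summaries here and should be verified against the installed MathNet.Numerics build.

    using System;
    using MathNet.Numerics.Distributions;

    class StableDemo
    {
        static void Main()
        {
            // Stability alpha = 2 (the Gaussian special case), skewness beta = 0,
            // scale c = 1, location mu = 0.
            var stable = new Stable(2.0, 0.0, 1.0, 0.0);

            // CDF/PDF are only supported for the special parameter sets noted above
            // (alpha = 2 among them); other combinations throw.
            Console.WriteLine(stable.CumulativeDistribution(0.0));   // 0.5 by symmetry

            // Sampling works for any valid parameter set.
            Console.WriteLine(stable.Sample());
        }
    }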
+ + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The random number generator to use. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Generates a sample from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sample from the distribution. + + + + Generates a sequence of samples from the distribution. + + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The stability (α) of the distribution. Range: 2 ≥ α > 0. + The skewness (β) of the distribution. Range: 1 ≥ β ≥ -1. + The scale (c) of the distribution. Range: c > 0. + The location (μ) of the distribution. + a sequence of samples from the distribution. + + + + Continuous Univariate Student's T-distribution. + Implements the univariate Student t-distribution. For details about this + distribution, see + + Wikipedia - Student's t-distribution. + + We use a slightly generalized version (compared to + Wikipedia) of the Student t-distribution. 
Namely, one which also + parameterizes the location and scale. See the book "Bayesian Data + Analysis" by Gelman et al. for more details. + The density of the Student t-distribution p(x|mu,scale,dof) = + Gamma((dof+1)/2) (1 + (x - mu)^2 / (scale * scale * dof))^(-(dof+1)/2) / + (Gamma(dof/2)*Sqrt(dof*pi*scale)). + The distribution will use the by + default. Users can get/set the random number generator by using the + property. + The statistics classes will check all the incoming parameters + whether they are in the allowed range. This might involve heavy + computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the StudentT class. This is a Student t-distribution with location 0.0 + scale 1.0 and degrees of freedom 1. + + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Initializes a new instance of the StudentT class with a particular location, scale and degrees of + freedom. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + + + + Gets the location (μ) of the Student t-distribution. + + + + + Gets the scale (σ) of the Student t-distribution. Range: σ > 0. + + + + + Gets the degrees of freedom (ν) of the Student t-distribution. Range: ν > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Student t-distribution. + + + + + Gets the variance of the Student t-distribution. + + + + + Gets the standard deviation of the Student t-distribution. + + + + + Gets the entropy of the Student t-distribution. + + + + + Gets the skewness of the Student t-distribution. + + + + + Gets the mode of the Student t-distribution. + + + + + Gets the median of the Student t-distribution. + + + + + Gets the minimum of the Student t-distribution. + + + + + Gets the maximum of the Student t-distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . 
+ + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Samples student-t distributed random variables. + + The algorithm is method 2 in section 5, chapter 9 + in L. Devroye's "Non-Uniform Random Variate Generation" + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a random number from the standard student-t distribution. + + + + Generates a sample from the Student t-distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Student t-distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + the inverse cumulative density at . + + WARNING: currently not an explicit implementation, hence slow and unreliable. + + + + Generates a sample from the Student t-distribution. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The random number generator to use. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Student t-distribution. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. 
+ a sample from the distribution. + + + + Generates a sequence of samples from the Student t-distribution using the Box-Muller algorithm. + + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The location (μ) of the distribution. + The scale (σ) of the distribution. Range: σ > 0. + The degrees of freedom (ν) for the distribution. Range: ν > 0. + a sequence of samples from the distribution. + + + + Triangular distribution. + For details, see Wikipedia - Triangular distribution. + + The distribution will use the by default. + Users can get/set the random number generator by using the property. + The statistics classes will check whether all the incoming parameters are in the allowed range. This might involve heavy computation. Optionally, by setting Control.CheckDistributionParameters + to false, all parameter checks can be turned off. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + Initializes a new instance of the Triangular class with the given lower bound, upper bound and mode. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The random number generator which is used to draw random samples. + If the upper bound is smaller than the mode or if the mode is smaller than the lower bound. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + + + + Gets the lower bound of the distribution. + + + + + Gets the upper bound of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + + Gets the skewness of the distribution. + + + + + Gets or sets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + + Gets the minimum of the distribution. + + + + + Gets the maximum of the distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. 
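A short sketch of how the Student-t and Triangular helpers documented in this stretch are typically used; constructor argument orders follow the parameter summaries above and should be double-checked against the actual assembly.

    using System;
    using MathNet.Numerics.Distributions;

    class StudentTTriangularDemo
    {
        static void Main()
        {
            // Student t with location 0, scale 1 and 5 degrees of freedom.
            var t = new StudentT(0.0, 1.0, 5.0);
            Console.WriteLine(t.Mean);                           // 0 for freedom > 1
            Console.WriteLine(t.CumulativeDistribution(2.0));    // P(X <= 2)

            // Equivalent static form: StudentT.CDF(location, scale, freedom, x).
            Console.WriteLine(StudentT.CDF(0.0, 1.0, 5.0, 2.0));

            // Triangular takes lower bound, upper bound and mode (lower <= mode <= upper).
            var tri = new Triangular(0.0, 10.0, 3.0);
            Console.WriteLine(tri.Density(3.0));                         // peak density at the mode
            Console.WriteLine(tri.InverseCumulativeDistribution(0.5));   // the median
        }
    }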
This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Triangular distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the cumulative distribution at location . + + + + + Computes the inverse of the cumulative distribution function (InvCDF) for the distribution + at the given probability. This is also known as the quantile or percent point function. + + The location at which to compute the inverse cumulative density. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + the inverse cumulative density at . + + + + + Generates a sample from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + The random number generator to use. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Generates a sample from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sample from the distribution. + + + + Generates a sequence of samples from the Triangular distribution. + + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + Lower bound. Range: lower ≤ mode ≤ upper + Upper bound. Range: lower ≤ mode ≤ upper + Mode (most frequent value). 
Range: lower ≤ mode ≤ upper + a sequence of samples from the distribution. + + + + Initializes a new instance of the TruncatedPareto class. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The random number generator which is used to draw random samples. + If or are non-positive or if T ≤ xm. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Gets the random number generator which is used to draw random samples. + + + + + Gets the scale (xm) of the distribution. Range: xm > 0. + + + + + Gets the shape (α) of the distribution. Range: α > 0. + + + + + Gets the truncation (T) of the distribution. Range: T > 0. + + + + + Gets the n-th raw moment of the distribution. + + The order (n) of the moment. Range: n ≥ 1. + the n-th moment of the distribution. + + + + Gets the mean of the truncated Pareto distribution. + + + + + Gets the variance of the truncated Pareto distribution. + + + + + Gets the standard deviation of the truncated Pareto distribution. + + + + + Gets the mode of the truncated Pareto distribution (not supported). + + + + + Gets the minimum of the truncated Pareto distribution. + + + + + Gets the maximum of the truncated Pareto distribution. + + + + + Gets the entropy of the truncated Pareto distribution (not supported). + + + + + Gets the skewness of the truncated Pareto distribution. + + + + + Gets the median of the truncated Pareto distribution. + + + + + Generates a sample from the truncated Pareto distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + a sequence of samples from the distribution. + + + + Generates a sample from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + + + + Generates a sequence of samples from the truncated Pareto distribution. + + The random number generator to use. + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). 
+ + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + Computes the inverse cumulative distribution (CDF) of the distribution at p, i.e. solving for P(X ≤ x) = p. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the inverse cumulative distribution function. + the inverse cumulative distribution at location . + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the log density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The scale (xm) of the distribution. Range: xm > 0. + The shape (α) of the distribution. Range: α > 0. + The truncation (T) of the distribution. Range: T > xm. + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + + Continuous Univariate Weibull distribution. + For details about this distribution, see + Wikipedia - Weibull distribution. + + + The Weibull distribution is parametrized by a shape and scale parameter. + + + + + Reusable intermediate result 1 / (_scale ^ _shape) + + + By caching this parameter we can get slightly better numerics precision + in certain constellations without any additional computations. + + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Initializes a new instance of the Weibull class. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + + + + Gets the shape (k) of the Weibull distribution. Range: k > 0. + + + + + Gets the scale (λ) of the Weibull distribution. Range: λ > 0. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the Weibull distribution. + + + + + Gets the variance of the Weibull distribution. + + + + + Gets the standard deviation of the Weibull distribution. + + + + + Gets the entropy of the Weibull distribution. + + + + + Gets the skewness of the Weibull distribution. + + + + + Gets the mode of the Weibull distribution. 
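A minimal sketch of typical Weibull usage (the truncated Pareto class above follows the same construct/PDF/CDF/Sample pattern). The Weibull.Estimate name for the Qiao/Tsokos parameter estimator referenced further below is an assumption and should be confirmed against the installed version.

    using System;
    using MathNet.Numerics.Distributions;

    class WeibullDemo
    {
        static void Main()
        {
            // Weibull with shape k = 1.5 and scale lambda = 2.0.
            var w = new Weibull(1.5, 2.0);
            Console.WriteLine(w.Mean);
            Console.WriteLine(w.CumulativeDistribution(2.0));   // P(X <= 2)

            // Static forms mirror the instance methods: Weibull.PDF(shape, scale, x) etc.
            Console.WriteLine(Weibull.PDF(1.5, 2.0, 1.0));

            // Fit shape/scale to observed data (method name assumed; see the estimator
            // reference documented below).
            double[] data = { 0.5, 1.2, 2.3, 1.7, 0.9, 3.1 };
            Weibull estimated = Weibull.Estimate(data);
            Console.WriteLine($"shape {estimated.Shape}, scale {estimated.Scale}");
        }
    }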
+ + + + + Gets the median of the Weibull distribution. + + + + + Gets the minimum of the Weibull distribution. + + + + + Gets the maximum of the Weibull distribution. + + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The location at which to compute the density. + the density at . + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The location at which to compute the log density. + the log density at . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Generates a sample from the Weibull distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Generates a sequence of samples from the Weibull distribution. + + a sequence of samples from the distribution. + + + + Computes the probability density of the distribution (PDF) at x, i.e. ∂P(X ≤ x)/∂x. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the density at . + + + + + Computes the log probability density of the distribution (lnPDF) at x, i.e. ln(∂P(X ≤ x)/∂x). + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + The location at which to compute the density. + the log density at . + + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + the cumulative distribution at location . + + + + + Implemented according to: Parameter estimation of the Weibull probability distribution, 1994, Hongzhu Qiao, Chris P. Tsokos + + + + Returns a Weibull distribution. + + + + Generates a sample from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The random number generator to use. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Generates a sample from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sample from the distribution. + + + + Generates a sequence of samples from the Weibull distribution. + + The shape (k) of the Weibull distribution. Range: k > 0. + The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The shape (k) of the Weibull distribution. Range: k > 0. 
+ The scale (λ) of the Weibull distribution. Range: λ > 0. + a sequence of samples from the distribution. + + + + Multivariate Wishart distribution. This distribution is + parameterized by the degrees of freedom nu and the scale matrix S. The Wishart distribution + is the conjugate prior for the precision (inverse covariance) matrix of the multivariate + normal distribution. + Wikipedia - Wishart distribution. + + + + + The degrees of freedom for the Wishart distribution. + + + + + The scale matrix for the Wishart distribution. + + + + + Caches the Cholesky factorization of the scale matrix. + + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Initializes a new instance of the class. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The random number generator which is used to draw random samples. + + + + Tests whether the provided values are valid parameters for this distribution. + + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + + + + Gets or sets the degrees of freedom (n) for the Wishart distribution. + + + + + Gets or sets the scale matrix (V) for the Wishart distribution. + + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + The mean of the distribution. + + + + Gets the mode of the distribution. + + The mode of the distribution. + + + + Gets the variance of the distribution. + + The variance of the distribution. + + + + Evaluates the probability density function for the Wishart distribution. + + The matrix at which to evaluate the density at. + If the argument does not have the same dimensions as the scale matrix. + the density at . + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + A random number from this distribution. + + + + Samples a Wishart distributed random variable using the method + Algorithm AS 53: Wishart Variate Generator + W. B. Smith and R. R. Hocking + Applied Statistics, Vol. 21, No. 3 (1972), pp. 341-345 + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + a sequence of samples from the distribution. + + + + Samples the distribution. + + The random number generator to use. + The degrees of freedom (n) for the Wishart distribution. + The scale matrix (V) for the Wishart distribution. + The cholesky decomposition to use. + a random number from the distribution. + + + + Discrete Univariate Zipf distribution. + Zipf's law, an empirical law formulated using mathematical statistics, refers to the fact + that many types of data studied in the physical and social sciences can be approximated with + a Zipfian distribution, one of a family of related discrete power law probability distributions. + For details about this distribution, see + Wikipedia - Zipf distribution. + + + + + The s parameter of the distribution. + + + + + The n parameter of the distribution. + + + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. 
+ + + + Initializes a new instance of the class. + + The s parameter of the distribution. + The n parameter of the distribution. + The random number generator which is used to draw random samples. + + + + A string representation of the distribution. + + a string representation of the distribution. + + + + Tests whether the provided values are valid parameters for this distribution. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Gets or sets the s parameter of the distribution. + + + + + Gets or sets the n parameter of the distribution. + + + + + Gets or sets the random number generator which is used to draw random samples. + + + + + Gets the mean of the distribution. + + + + + Gets the variance of the distribution. + + + + + Gets the standard deviation of the distribution. + + + + + Gets the entropy of the distribution. + + + + + Gets the skewness of the distribution. + + + + + Gets the mode of the distribution. + + + + + Gets the median of the distribution. + + + + + Gets the smallest element in the domain of the distributions which can be represented by an integer. + + + + + Gets the largest element in the domain of the distributions which can be represented by an integer. + + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + the cumulative distribution at location . + + + + Computes the probability mass (PMF) at k, i.e. P(X = k). + + The location in the domain where we want to evaluate the probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the probability mass at location . + + + + Computes the log probability mass (lnPMF) at k, i.e. ln(P(X = k)). + + The location in the domain where we want to evaluate the log probability mass function. + The s parameter of the distribution. + The n parameter of the distribution. + the log probability mass at location . + + + + Computes the cumulative distribution (CDF) of the distribution at x, i.e. P(X ≤ x). + + The location at which to compute the cumulative distribution function. + The s parameter of the distribution. + The n parameter of the distribution. + the cumulative distribution at location . + + + + + Generates a sample from the Zipf distribution without doing parameter checking. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + a random number from the Zipf distribution. + + + + Draws a random sample from the distribution. + + a sample from the distribution. + + + + Fills an array with samples generated from the distribution. + + + + + Samples an array of zipf distributed random variables. + + a sequence of samples from the distribution. + + + + Samples a random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The random number generator to use. + The s parameter of the distribution. + The n parameter of the distribution. 
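A combined sketch for the Wishart and Zipf distributions documented above; the Wishart scale matrix is built with MathNet.Numerics.LinearAlgebra, and the signatures are assumed from the summaries here.

    using System;
    using MathNet.Numerics.Distributions;
    using MathNet.Numerics.LinearAlgebra;

    class WishartZipfDemo
    {
        static void Main()
        {
            // Wishart with 5 degrees of freedom and a 2x2 identity scale matrix.
            var scale = Matrix<double>.Build.DenseIdentity(2);
            var wishart = new Wishart(5.0, scale);
            Matrix<double> sample = wishart.Sample();   // a random positive-definite matrix
            Console.WriteLine(sample);

            // Zipf with exponent s = 1.1 over n = 1000 ranks.
            var zipf = new Zipf(1.1, 1000);
            Console.WriteLine(zipf.Probability(1));               // P(X = 1), the most likely rank
            Console.WriteLine(zipf.CumulativeDistribution(10));   // P(X <= 10)
            Console.WriteLine(zipf.Sample());
        }
    }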
+ + + + Fills an array with samples generated from the distribution. + + The random number generator to use. + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Samples a sequence of this random variable. + + The s parameter of the distribution. + The n parameter of the distribution. + + + + Fills an array with samples generated from the distribution. + + The array to fill with the samples. + The s parameter of the distribution. + The n parameter of the distribution. + + + + Integer number theory functions. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Canonical Modulus. The result has the sign of the divisor. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Remainder (% operator). The result has the sign of the dividend. + + + + + Find out whether the provided 32 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 64 bit integer is an even number. + + The number to very whether it's even. + True if and only if it is an even number. + + + + Find out whether the provided 32 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 64 bit integer is an odd number. + + The number to very whether it's odd. + True if and only if it is an odd number. + + + + Find out whether the provided 32 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 64 bit integer is a perfect power of two. + + The number to very whether it's a power of two. + True if and only if it is a power of two. + + + + Find out whether the provided 32 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Find out whether the provided 64 bit integer is a perfect square, i.e. a square of an integer. + + The number to very whether it's a perfect square. + True if and only if it is a perfect square. + + + + Raises 2 to the provided integer exponent (0 <= exponent < 31). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Raises 2 to the provided integer exponent (0 <= exponent < 63). + + The exponent to raise 2 up to. + 2 ^ exponent. + + + + + Evaluate the binary logarithm of an integer number. + + Two-step method using a De Bruijn-like sequence table lookup. + + + + Find the closest perfect power of two that is larger or equal to the provided + 32 bit integer. + + The number of which to find the closest upper power of two. + A power of two. + + + + + Find the closest perfect power of two that is larger or equal to the provided + 64 bit integer. + + The number of which to find the closest upper power of two. 
+ A power of two. + + + + + Returns the greatest common divisor (gcd) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's + algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of integers using Euclid's algorithm. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two integers using Euclid's algorithm. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of integers using Euclid's algorithm. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the greatest common divisor (gcd) of two big integers. + + First Integer: a. + Second Integer: b. + Greatest common divisor gcd(a,b) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Returns the greatest common divisor (gcd) of a set of big integers. + + List of Integers. + Greatest common divisor gcd(list of integers) + + + + Computes the extended greatest common divisor, such that a*x + b*y = gcd(a,b). + + First Integer: a. + Second Integer: b. + Resulting x, such that a*x + b*y = gcd(a,b). + Resulting y, such that a*x + b*y = gcd(a,b) + Greatest common divisor gcd(a,b) + + + long x,y,d; + d = Fn.GreatestCommonDivisor(45,18,out x, out y); + -> d == 9 && x == 1 && y == -2 + + The gcd of 45 and 18 is 9: 18 = 2*9, 45 = 5*9. 9 = 1*45 -2*18, therefore x=1 and y=-2. + + + + + Returns the least common multiple (lcm) of two big integers. + + First Integer: a. + Second Integer: b. + Least common multiple lcm(a,b) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Returns the least common multiple (lcm) of a set of big integers. + + List of Integers. + Least common multiple lcm(list of integers) + + + + Collection of functions equivalent to those provided by Microsoft Excel + but backed instead by Math.NET Numerics. + We do not recommend to use them except in an intermediate phase when + porting over solutions previously implemented in Excel. + + + + + An algorithm failed to converge. + + + + + An algorithm failed to converge due to a numerical breakdown. + + + + + An error occurred calling native provider function. + + + + + An error occurred calling native provider function. + + + + + Native provider was unable to allocate sufficient memory. + + + + + Native provider failed LU inversion do to a singular U matrix. 
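The number-theory helpers documented above live on the static Euclid class. A small sketch, with the expected values following directly from the definitions and the gcd example given above:

    using System;
    using MathNet.Numerics;

    class EuclidDemo
    {
        static void Main()
        {
            // Canonical modulus takes the sign of the divisor; Remainder keeps the sign of the dividend.
            Console.WriteLine(Euclid.Modulus(-3, 4));     // 1
            Console.WriteLine(Euclid.Remainder(-3, 4));   // -3

            // Power-of-two helpers.
            Console.WriteLine(Euclid.IsPowerOfTwo(64));           // True
            Console.WriteLine(Euclid.CeilingToPowerOfTwo(100));   // 128

            // gcd/lcm, including the extended form a*x + b*y = gcd(a,b).
            Console.WriteLine(Euclid.GreatestCommonDivisor(45, 18));   // 9
            Console.WriteLine(Euclid.LeastCommonMultiple(4, 6));       // 12
            long x, y;
            long d = Euclid.ExtendedGreatestCommonDivisor(45, 18, out x, out y);
            Console.WriteLine($"{d} = 45*{x} + 18*{y}");                // 9 = 45*1 + 18*(-2)
        }
    }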
+ + + + + Compound Monthly Return or Geometric Return or Annualized Return + + + + + Average Gain or Gain Mean + This is a simple average (arithmetic mean) of the periods with a gain. It is calculated by summing the returns for gain periods (return 0) + and then dividing the total by the number of gain periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Average Loss or LossMean + This is a simple average (arithmetic mean) of the periods with a loss. It is calculated by summing the returns for loss periods (return < 0) + and then dividing the total by the number of loss periods. + + http://www.offshore-library.com/kb/statistics.php + + + + Calculation is similar to Standard Deviation , except it calculates an average (mean) return only for periods with a gain + and measures the variation of only the gain periods around the gain mean. Measures the volatility of upside performance. + © Copyright 1996, 1999 Gary L.Gastineau. First Edition. © 1992 Swiss Bank Corporation. + + + + + Similar to standard deviation, except this statistic calculates an average (mean) return for only the periods with a loss and then + measures the variation of only the losing periods around this loss mean. This statistic measures the volatility of downside performance. + + http://www.offshore-library.com/kb/statistics.php + + + + This measure is similar to the loss standard deviation except the downside deviation + considers only returns that fall below a defined minimum acceptable return (MAR) rather than the arithmetic mean. + For example, if the MAR is 7%, the downside deviation would measure the variation of each period that falls below + 7%. (The loss standard deviation, on the other hand, would take only losing periods, calculate an average return for + the losing periods, and then measure the variation between each losing return and the losing return average). + + + + + A measure of volatility in returns below the mean. It's similar to standard deviation, but it only + looks at periods where the investment return was less than average return. + + + + + Measures a fund’s average gain in a gain period divided by the fund’s average loss in a losing + period. Periods can be monthly or quarterly depending on the data frequency. + + + + + Find value x that minimizes the scalar function f(x), constrained within bounds, using the Golden Section algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Nelder-Mead Simplex algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + The missing gradient is evaluated numerically (forward difference). + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. 
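As a hedged illustration of the FindMinimum facade summarised above, only the scalar golden-section variant is shown; the Nelder-Mead and BFGS overloads follow the same calling pattern, but their exact signatures should be checked against the installed MathNet.Numerics version.

    using System;
    using MathNet.Numerics;

    class FindMinimumDemo
    {
        static void Main()
        {
            // Minimise f(x) = (x - 3)^2 + 1 on the interval [0, 10] (golden section search).
            double xMin = FindMinimum.OfScalarFunctionConstrained(
                x => (x - 3.0) * (x - 3.0) + 1.0,
                0.0,    // lower bound
                10.0);  // upper bound

            Console.WriteLine(xMin);   // approximately 3
        }
    }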
+ An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x) using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. + For more options and diagnostics consider to use directly. + An alternative routine using conjugate gradients (CG) is available in . + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x), constrained within bounds, using the Broyden–Fletcher–Goldfarb–Shanno Bounded (BFGS-B) algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + + Find vector x that minimizes the function f(x) using the Newton algorithm. + For more options and diagnostics consider to use directly. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. + Maximum number of iterations. Example: 100. + + + + Find both complex roots of the quadratic equation c + b*x + a*x^2 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The coefficients of the polynomial in ascending order, e.g. new double[] {5, 0, 2} = "5 + 0 x^1 + 2 x^2" + The roots of the polynomial + + + + Find all roots of a polynomial by calculating the characteristic polynomial of the companion matrix + + The polynomial. + The roots of the polynomial + + + + Find all roots of the Chebychev polynomial of the first kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*(2i-1)/(2n)) + + + + Find all roots of the Chebychev polynomial of the second kind. + + The polynomial order and therefore the number of roots. + The real domain interval begin where to start sampling. + The real domain interval end where to stop sampling. + Samples in [a,b] at (b+a)/2+(b-1)/2*cos(pi*i/(n-1)) + + + + Least-Squares Curve Fitting Routines + + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning its best fitting parameters as [a, b] array, + where a is the intercept and b the slope. 
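A hedged sketch of the FindRoots facade documented above: a bracketed scalar root, the two roots of a quadratic given in ascending-coefficient order, and all roots of a polynomial via the companion matrix.

    using System;
    using System.Numerics;
    using MathNet.Numerics;

    class FindRootsDemo
    {
        static void Main()
        {
            // Bracketed root of f(x) = x^2 - 2 in [1, 2], refined to 1e-10.
            double root = FindRoots.OfFunction(x => x * x - 2.0, 1.0, 2.0, 1e-10);
            Console.WriteLine(root);   // ~1.41421356

            // Both complex roots of c + b*x + a*x^2 = 0, coefficients ascending by exponent:
            // here 2 - 3x + x^2 = 0, i.e. roots 1 and 2.
            var quadratic = FindRoots.Quadratic(2.0, -3.0, 1.0);
            Console.WriteLine($"{quadratic.Item1} {quadratic.Item2}");

            // All roots of 5 + 0*x + 2*x^2 (the coefficient convention documented above).
            Complex[] roots = FindRoots.Polynomial(new double[] { 5, 0, 2 });
            foreach (var r in roots) Console.WriteLine(r);
        }
    }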
+ + + + + Least-Squares fitting the points (x,y) to a line y : x -> a+b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning its best fitting parameter b, + where the intercept is zero and b the slope. + + + + + Least-Squares fitting the points (x,y) to a line through origin y : x -> b*x, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning its best fitting parameters as (a, r) tuple. + + + + + Least-Squares fitting the points (x,y) to an exponential y : x -> a*exp(r*x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a logarithm y : x -> a + b*ln(x), + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning its best fitting parameters as (a, b) tuple. + + + + + Least-Squares fitting the points (x,y) to a power y : x -> a*x^b, + returning a function y' for the best fitting line. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning a function y' for the best fitting polynomial. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Weighted Least-Squares fitting the points (x,y) and weights w to a k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array, compatible with Polynomial.Evaluate. + A polynomial with order/degree k has (k+1) coefficients and thus requires at least (k+1) samples. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (x,y) to an arbitrary linear combination y : x -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning a function y' for the best fitting combination. 
+ If an intercept is added, its coefficient will be prepended to the resulting parameters. + + + + + Weighted Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) and weights w to a linear surface y : X -> p0*x0 + p1*x1 + ... + pk*xk, + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (X,y) = ((x0,x1,..,xk),y) to an arbitrary linear combination y : X -> p0*f0(x) + p1*f1(x) + ... + pk*fk(x), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning its best fitting parameters as [p0, p1, p2, ..., pk] array. + + + + + Least-Squares fitting the points (T,y) = (T,y) to an arbitrary linear combination y : X -> p0*f0(T) + p1*f1(T) + ... + pk*fk(T), + returning a function y' for the best fitting combination. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning its best fitting parameter p. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning its best fitting parameter p0 and p1. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning its best fitting parameter p0, p1 and p2. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, x), + returning a function y' for the best fitting curve. + + + + + Non-linear least-squares fitting the points (x,y) to an arbitrary function y : x -> f(p0, p1, p2, x), + returning a function y' for the best fitting curve. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate samples by sampling a function at the provided points. + + + + + Generate a sample sequence by sampling a function at the provided point sequence. + + + + + Generate a linearly spaced sample vector of the given length between the specified values (inclusive). + Equivalent to MATLAB linspace but with the length as first instead of last argument. 
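The least-squares descriptions above correspond to the Math.NET Numerics `Fit` class; the method names and parameter order in the following sketch are inferred from that library (the file itself does not name them) and should be read as an illustration, not a definitive reference.

```csharp
// Sketch only: Fit.Line, Fit.Polynomial and Fit.Curve are assumed
// MathNet.Numerics method names; signatures are not given in this file.
using System;
using MathNet.Numerics;

class FittingSketch
{
    static void Main()
    {
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 2.1, 3.9, 6.2, 7.8, 10.1 };

        // y ~ a + b*x, returned as (intercept, slope).
        var line = Fit.Line(x, y);
        Console.WriteLine($"a = {line.Item1}, b = {line.Item2}");

        // 2nd-order polynomial p0 + p1*x + p2*x^2,
        // coefficients in ascending order (compatible with Polynomial.Evaluate).
        double[] p = Fit.Polynomial(x, y, 2);
        Console.WriteLine(string.Join(", ", p));

        // Non-linear model y = a*exp(r*x), initial guesses a = 1, r = 0.5.
        var curve = Fit.Curve(x, y, (a, r, t) => a * Math.Exp(r * t), 1.0, 0.5);
        Console.WriteLine($"a = {curve.Item1}, r = {curve.Item2}");
    }
}
```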
+ + + + + Generate samples by sampling a function at linearly spaced points between the specified values (inclusive). + + + + + Generate a base 10 logarithmically spaced sample vector of the given length between the specified decade exponents (inclusive). + Equivalent to MATLAB logspace but with the length as first instead of last argument. + + + + + Generate samples by sampling a function at base 10 logarithmically spaced points between the specified decade exponents (inclusive). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and step 1. + Equivalent to MATLAB colon operator (:). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provided step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate a linearly spaced sample vector within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + Equivalent to MATLAB double colon operator (::). + + + + + Generate samples by sampling a function at linearly spaced points within the inclusive interval (start, stop) and the provide step. + The start value is aways included as first value, but stop is only included if it stop-start is a multiple of step. + + + + + Create a periodic wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic wave. + + The number of samples to generate. + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite periodic wave sequence. + + The function to apply to each of the values and evaluate the resulting sample. + Samples per time unit (Hz). 
Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The length of the period when sampled at one sample per time unit. This is the interval of the periodic domain, a typical value is 1.0, or 2*Pi for angular functions. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a Sine wave. + + The number of samples to generate. + Samples per time unit (Hz). Must be larger than twice the frequency to satisfy the Nyquist criterion. + Frequency in periods per time unit (Hz). + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create an infinite Sine wave sequence. + + Samples per unit. + Frequency in samples per unit. + The maximal reached peak. + The mean, or DC part, of the signal. + Optional phase offset. + Optional delay, relative to the phase. + + + + Create a periodic square wave, starting with the high phase. + + The number of samples to generate. + Number of samples of the high phase. + Number of samples of the low phase. + Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create an infinite periodic square wave sequence, starting with the high phase. + + Number of samples of the high phase. + Number of samples of the low phase. + Sample value to be emitted during the low phase. + Sample value to be emitted during the high phase. + Optional delay. + + + + Create a periodic triangle wave, starting with the raise phase from the lowest sample. + + The number of samples to generate. + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic triangle wave sequence, starting with the raise phase from the lowest sample. + + Number of samples of the raise phase. + Number of samples of the fall phase. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create a periodic sawtooth wave, starting with the lowest sample. + + The number of samples to generate. + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an infinite periodic sawtooth wave sequence, starting with the lowest sample. + + Number of samples a full sawtooth period. + Lowest sample value. + Highest sample value. + Optional delay. + + + + Create an array with each field set to the same value. + + The number of samples to generate. + The value that each field should be set to. + + + + Create an infinite sequence where each element has the same value. + + The value that each element should be set to. + + + + Create a Heaviside Step sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. + + + + Create an infinite Heaviside Step sample sequence. + + The maximal reached peak. + Offset to the time axis. + + + + Create a Kronecker Delta impulse sample vector. + + The number of samples to generate. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Create a Kronecker Delta impulse sample vector. + + The maximal reached peak. + Offset to the time axis, hence the sample index of the impulse. + + + + Create a periodic Kronecker Delta impulse sample vector. + + The number of samples to generate. + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. 
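The signal generators above appear to belong to the Math.NET Numerics `Generate` class; the names and the exact parameter order used below (length, sampling rate, frequency, amplitude, ...) are an assumption based on the parameter descriptions in this section.

```csharp
// Sketch only: Generate.LinearSpaced / Generate.Sinusoidal / Generate.Square
// are assumed MathNet.Numerics method names.
using System;
using MathNet.Numerics;

class GenerateSketch
{
    static void Main()
    {
        // 10 equally spaced points from 0 to 1 (like MATLAB linspace, length first).
        double[] t = Generate.LinearSpaced(10, 0.0, 1.0);

        // One second of a 1.5 kHz tone sampled at 8 kHz with amplitude 0.5.
        double[] tone = Generate.Sinusoidal(8000, 8000.0, 1500.0, 0.5);

        // Square wave: 4 samples high, 4 samples low, low value -1, high value +1.
        double[] square = Generate.Square(32, 4, 4, -1.0, 1.0);

        Console.WriteLine($"{t.Length} {tone.Length} {square.Length}");
    }
}
```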
+ + + + Create a Kronecker Delta impulse sample vector. + + impulse sequence period. + The maximal reached peak. + Offset to the time axis. Zero or positive. + + + + Generate samples generated by the given computation. + + + + + Generate an infinite sequence generated by the given computation. + + + + + Generate a Fibonacci sequence, including zero as first value. + + + + + Generate an infinite Fibonacci sequence, including zero as first value. + + + + + Create random samples, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create an infinite random sample sequence, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution, uniform between 0 and 1. + Faster than other methods but with reduced guarantees on randomness. + + + + + Create samples with independent amplitudes of standard distribution. + + + + + Create an infinite sample sequence with independent amplitudes of standard distribution. + + + + + Create samples with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create an infinite sample sequence with independent amplitudes of normal distribution and a flat spectral density. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Create random samples. + + + + + Create an infinite random sample sequence. + + + + + Generate samples by sampling a function at samples from a probability distribution. + + + + + Generate a sample sequence by sampling a function at samples from a probability distribution. + + + + + Generate samples by sampling a function at sample pairs from a probability distribution. + + + + + Generate a sample sequence by sampling a function at sample pairs from a probability distribution. + + + + + Globalized String Handling Helpers + + + + + Tries to get a from the format provider, + returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format + provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Tries to get a from the format provider, returning the current culture if it fails. + + + An that supplies culture-specific + formatting information. + + A instance. + + + + Globalized Parsing: Tokenize a node by splitting it into several nodes. + + Node that contains the trimmed string to be tokenized. + List of keywords to tokenize by. + keywords to skip looking for (because they've already been handled). 
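For the random-sample generators described above, a short sketch follows; `Generate.Uniform`, `Generate.Standard` and the distribution-based `Generate.Random` overload are assumed Math.NET Numerics names, not confirmed by this file.

```csharp
// Sketch only: the Generate.* names below are assumptions.
using System;
using MathNet.Numerics;
using MathNet.Numerics.Distributions;

class RandomSamplesSketch
{
    static void Main()
    {
        // Fast uniform samples in [0, 1) with reduced randomness guarantees.
        double[] u = Generate.Uniform(1000);

        // Standard-normal distributed samples (flat spectral density).
        double[] g = Generate.Standard(1000);

        // Samples drawn from an explicit distribution object.
        double[] n = Generate.Random(1000, new Normal(0.0, 1.0));

        Console.WriteLine($"{u.Length} {g.Length} {n.Length}");
    }
}
```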
+ + + + Globalized Parsing: Parse a double number + + First token of the number. + Culture Info. + The parsed double number using the given culture information. + + + + + Globalized Parsing: Parse a float number + + First token of the number. + Culture Info. + The parsed float number using the given culture information. + + + + + Calculates r^2, the square of the sample correlation coefficient between + the observed outcomes and the observed predictor values. + Not to be confused with R^2, the coefficient of determination, see . + + The modelled/predicted values + The observed/actual values + Squared Person product-momentum correlation coefficient. + + + + Calculates r, the sample correlation coefficient between the observed outcomes + and the observed predictor values. + + The modelled/predicted values + The observed/actual values + Person product-momentum correlation coefficient. + + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The Standard Error of the regression + + + + Calculates the Standard Error of the regression, given a sequence of + modeled/predicted values, and a sequence of actual/observed values + + The modelled/predicted values + The observed/actual values + The degrees of freedom by which the + number of samples is reduced for performing the Standard Error calculation + The Standard Error of the regression + + + + Calculates the R-Squared value, also known as coefficient of determination, + given some modelled and observed values. + + The values expected from the model. + The actual values obtained. + Coefficient of determination. + + + + Complex Fast (FFT) Implementation of the Discrete Fourier Transform (DFT). + + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the FFT is evaluated in place. + Imaginary part of the sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed from the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. 
+ Fourier Transform Convention Options. + + + + Packed Real-Complex forward Fast Fourier Transform (FFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to multiple dimensional sample data. + + Sample data, where the FFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to two dimensional sample data. + + Sample data, organized row by row, where the FFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the forward Fast Fourier Transform (FFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the FFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Spectrum data, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. 
+ + Real part of the sample vector, where the iFFT is evaluated in place. + Imaginary part of the sample vector, where the iFFT is evaluated in place. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Packed Real-Complex inverse Fast Fourier Transform (iFFT) to arbitrary-length sample vectors. + Since for real-valued time samples the complex spectrum is conjugate-even (symmetry), + the spectrum can be fully reconstructed form the positive frequencies only (first half). + The data array needs to be N+2 (if N is even) or N+1 (if N is odd) long in order to support such a packed spectrum. + + Data array of length N+2 (if N is even) or N+1 (if N is odd). + The number of samples. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to multiple dimensional sample data. + + Spectrum data, where the iFFT is evaluated in place. + + The data size per dimension. The first dimension is the major one. + For example, with two dimensions "rows" and "columns" the samples are assumed to be organized row by row. + + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to two dimensional sample data. + + Sample data, organized row by row, where the iFFT is evaluated in place + The number of rows. + The number of columns. + Data available organized column by column instead of row by row can be processed directly by swapping the rows and columns arguments. + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Applies the inverse Fast Fourier Transform (iFFT) to a two dimensional data in form of a matrix. + + Sample matrix, where the iFFT is evaluated in place + Fourier Transform Convention Options. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive forward DFT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Fourier Transform Convention Options. + Corresponding frequency-space vector. 
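The FFT routines above are exposed through the `Fourier` class in MathNet.Numerics.IntegralTransforms; the method names and the `FourierOptions.Matlab` convention used below are an assumption based on that library and on the convention options described further down.

```csharp
// Sketch only: assumes the MathNet.Numerics Fourier API (Forward, Inverse,
// FrequencyScale) and the Matlab scaling convention.
using System;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FourierSketch
{
    static void Main()
    {
        // 256 samples of a 1.5 kHz tone at 8 kHz sample rate, as complex values.
        var samples = new Complex[256];
        for (int i = 0; i < samples.Length; i++)
            samples[i] = new Complex(Math.Sin(2 * Math.PI * 1500.0 * i / 8000.0), 0.0);

        // In-place forward FFT, MATLAB convention (1/N scaling on the inverse only).
        Fourier.Forward(samples, FourierOptions.Matlab);

        // Frequencies per bin: DC, positive frequencies, then negatives wrapped around.
        double[] freq = Fourier.FrequencyScale(samples.Length, 8000.0);
        Console.WriteLine($"Bin 48 corresponds to {freq[48]} Hz");

        // Round trip back to the time domain.
        Fourier.Inverse(samples, FourierOptions.Matlab);
    }
}
```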
+ + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Naive inverse DFT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Fourier Transform Convention Options. + Corresponding time-space vector. + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 forward FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Radix-2 inverse FFT for power-of-two sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein forward FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Bluestein inverse FFT for arbitrary sized sample vectors. + + Sample vector, where the FFT is evaluated in place. + Fourier Transform Convention Options. + + + + Generate the frequencies corresponding to each index in frequency space. + The frequency space has a resolution of sampleRate/N. + Index 0 corresponds to the DC part, the following indices correspond to + the positive frequencies up to the Nyquist frequency (sampleRate/2), + followed by the negative frequencies wrapped around. + + Number of samples. + The sampling rate of the time-space data. + + + + Fourier Transform Convention + + + + + Inverse integrand exponent (forward: positive sign; inverse: negative sign). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling and common exponent (used in Maple). + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction (used in Matlab). [= AsymmetricScaling] + + + + + Inverse integrand exponent; No scaling at all (used in all Numerical Recipes based implementations). [= InverseExponent | NoScaling] + + + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + Fast (FHT) Implementation of the Discrete Hartley Transform (DHT). + + + + + Naive forward DHT, useful e.g. to verify faster algorithms. + + Time-space sample vector. + Hartley Transform Convention Options. + Corresponding frequency-space vector. + + + + Naive inverse DHT, useful e.g. to verify faster algorithms. + + Frequency-space sample vector. + Hartley Transform Convention Options. + Corresponding time-space vector. + + + + Rescale FFT-the resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Rescale the iFFT-resulting vector according to the provided convention options. + + Fourier Transform Convention Options. + Sample Vector. + + + + Naive generic DHT, useful e.g. 
to verify faster algorithms. + + Time-space sample vector. + Corresponding frequency-space vector. + + + + Hartley Transform Convention + + + + + Only scale by 1/N in the inverse direction; No scaling in forward direction. + + + + + Don't scale at all (neither on forward nor on inverse transformation). + + + + + Universal; Symmetric scaling. + + + + + Numerical Integration (Quadrature). + + + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function on a closed interval. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Legendre quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. 
+ Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping. + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth function to integrate. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Numerical Contour Integration of a complex-valued function over a real variable,. + + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth complex function by double-exponential quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Approximation of the definite integral of an analytic smooth function by Gauss-Kronrod quadrature. 
When either or both limits are infinite, the integrand is assumed rapidly decayed to zero as x -> infinity. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts. + Where the interval stops. + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The expected relative accuracy of the approximation. + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + Approximation of the finite integral in the given interval. + + + + Analytic integration algorithm for smooth functions with no discontinuities + or derivative discontinuities and no poles inside the interval. + + + + + Maximum number of iterations, until the asked + maximum error is (likely to be) satisfied. + + + + + Approximate the integral by the double exponential transformation + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximate the integral by the double exponential transformation + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Compute the abscissa vector for a single level. + + The level to evaluate the abscissa vector for. + Abscissa Vector. + + + + Compute the weight vector for a single level. + + The level to evaluate the weight vector for. + Weight Vector. + + + + Precomputed abscissa vector per level. + + + + + Precomputed weight vector per level. + + + + + Getter for the order. + + + + + Getter that returns a clone of the array containing the Kronrod abscissas. + + + + + Getter that returns a clone of the array containing the Kronrod weights. + + + + + Getter that returns a clone of the array containing the Gauss weights. + + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth function to integrate + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. + The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + Performs adaptive Gauss-Kronrod quadrature on function f over the range (a,b) + + The analytic smooth complex function to integrate, defined on the real axis. + Where the interval starts + Where the interval stops + The difference between the (N-1)/2 point Gauss approximation and the N-point Gauss-Kronrod approximation + The L1 norm of the result, if there is a significant difference between this and the returned value, then the result is likely to be ill-conditioned. 
+ The maximum relative error in the result + The maximum number of interval splittings permitted before stopping + The number of Gauss-Kronrod points. Pre-computed for 15, 21, 31, 41, 51 and 61 points + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + + Initializes a new instance of the class. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + + + + Gettter for the ith abscissa. + + Index of the ith abscissa. + The ith abscissa. + + + + Getter that returns a clone of the array containing the abscissas. + + + + + Getter for the ith weight. + + Index of the ith weight. + The ith weight. + + + + Getter that returns a clone of the array containing the weights. + + + + + Getter for the order. + + + + + Getter for the InvervalBegin. + + + + + Getter for the InvervalEnd. + + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth function to integrate. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a definite integral using an Nth order Gauss-Legendre rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, exclusive and finite. + Where the interval ends, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Approximates a 2-dimensional definite integral using an Nth order Gauss-Legendre rule over the rectangle [a,b] x [c,d]. + + The 2-dimensional analytic smooth function to integrate. + Where the interval starts for the first (inside) integral, exclusive and finite. + Where the interval ends for the first (inside) integral, exclusive and finite. + Where the interval starts for the second (outside) integral, exclusive and finite. + /// Where the interval ends for the second (outside) integral, exclusive and finite. + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Approximation of the finite integral in the given interval. + + + + Contains a method to compute the Gauss-Kronrod abscissas/weights and precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + Contains a method to compute the Gauss-Kronrod abscissas/weights. 
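The Gauss-Legendre rule described above corresponds to `GaussLegendreRule` in MathNet.Numerics.Integration; the static `Integrate` helpers and the higher-level `Integrate.OnClosedInterval` wrapper shown below are assumed entry points, not spelled out in this file.

```csharp
// Sketch only: Integrate.OnClosedInterval and GaussLegendreRule.Integrate
// are assumed MathNet.Numerics quadrature entry points.
using System;
using MathNet.Numerics;
using MathNet.Numerics.Integration;

class QuadratureSketch
{
    static void Main()
    {
        Func<double, double> f = x => Math.Exp(-x * x);

        // Adaptive approximation on [0, 2] to a target accuracy.
        double a = Integrate.OnClosedInterval(f, 0.0, 2.0, 1e-10);

        // Fixed 32nd-order Gauss-Legendre rule on the same interval
        // (orders 2-20, 32, 64, ... use precomputed abscissas/weights).
        double b = GaussLegendreRule.Integrate(f, 0.0, 2.0, 32);

        // 2D rule over the rectangle [0,1] x [0,2].
        double c = GaussLegendreRule.Integrate((x, y) => x * y, 0.0, 1.0, 0.0, 2.0, 32);

        Console.WriteLine($"{a} {b} {c}");
    }
}
```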
+ + + + + Precomputed abscissas/weights for orders 15, 21, 31, 41, 51, 61. + + + + + Computes the Gauss-Kronrod abscissas/weights and Gauss weights. + + Defines an Nth order Gauss-Kronrod rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. + Object containing the non-negative abscissas/weights, order. + + + + Returns coefficients of a Stieltjes polynomial in terms of Legendre polynomials. + + + + + Return value and derivative of a Legendre series at given points. + + + + + Return value and derivative of a Legendre polynomial of order at given points. + + + + + Creates a Gauss-Kronrod point. + + + + + Getter for the GaussKronrodPoint. + + Defines an Nth order Gauss-Kronrod rule. Precomputed Gauss-Kronrod abscissas/weights for orders 15, 21, 31, 41, 51, 61 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, and order. + + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + Contains a method to compute the Gauss-Legendre abscissas/weights and precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Precomputed abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024. + + + + + Computes the Gauss-Legendre abscissas/weights. + See Pavel Holoborodko for a description of the algorithm. + + Defines an Nth order Gauss-Legendre rule. The order also defines the number of abscissas and weights for the rule. + Required precision to compute the abscissas/weights. 1e-10 is usually fine. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Creates and maps a Gauss-Legendre point. + + + + + Getter for the GaussPoint. + + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + + + + Getter for the GaussPoint. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Defines an Nth order Gauss-Legendre rule. Precomputed Gauss-Legendre abscissas/weights for orders 2-20, 32, 64, 96, 100, 128, 256, 512, 1024 are used, otherwise they're calculated on the fly. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Maps the non-negative abscissas/weights from the interval [-1, 1] to the interval [intervalBegin, intervalEnd]. + + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Object containing the non-negative abscissas/weights, order, and intervalBegin/intervalEnd. The non-negative abscissas/weights are generated over the interval [-1,1] for the given order. + Object containing the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + Contains the abscissas/weights, order, and intervalBegin/intervalEnd. + + + + + Contains two GaussPoint. + + + + + Approximation algorithm for definite integrals by the Trapezium rule of the Newton-Cotes family. 
+ + + Wikipedia - Trapezium Rule + + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Direct 2-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Composite N-point approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, defined on real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral in the provided interval by the trapezium rule. + + The analytic smooth complex function to integrate, define don real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + The expected accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Adaptive approximation of the definite integral by the trapezium rule. + + The analytic smooth complex function to integrate, defined on the real domain. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Abscissa vector per level provider. + Weight vector per level provider. + First Level Step + The expected relative accuracy of the approximation. + Approximation of the finite integral in the given interval. + + + + Approximation algorithm for definite integrals by Simpson's rule. + + + + + Direct 3-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Approximation of the finite integral in the given interval. 
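For the Newton-Cotes rules above, Math.NET Numerics provides `NewtonCotesTrapeziumRule` and `SimpsonRule` in MathNet.Numerics.Integration; the specific method names in this sketch are inferred from that library and may differ from the actual API.

```csharp
// Sketch only: the NewtonCotesTrapeziumRule / SimpsonRule method names are assumed.
using System;
using MathNet.Numerics.Integration;

class NewtonCotesSketch
{
    static void Main()
    {
        Func<double, double> f = Math.Sin;

        // Composite trapezium rule with 1000 partitions on [0, pi] (exact value: 2).
        double trapez = NewtonCotesTrapeziumRule.IntegrateComposite(f, 0.0, Math.PI, 1000);

        // Adaptive trapezium rule to a target accuracy.
        double adaptive = NewtonCotesTrapeziumRule.IntegrateAdaptive(f, 0.0, Math.PI, 1e-8);

        // Direct 3-point Simpson approximation.
        double simpson = SimpsonRule.IntegrateThreePoint(f, 0.0, Math.PI);

        Console.WriteLine($"{trapez} {adaptive} {simpson}");
    }
}
```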
+ + + + Composite N-point approximation of the definite integral in the provided interval by Simpson's rule. + + The analytic smooth function to integrate. + Where the interval starts, inclusive and finite. + Where the interval stops, inclusive and finite. + Even number of composite subdivision partitions. + Approximation of the finite integral in the given interval. + + + + Interpolation Factory. + + + + + Creates an interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Floater-Hormann rational pole-free interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolateRationalFloaterHormannSorted + instead, which is more efficient. + + + + + Create a Bulirsch Stoer rational interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.BulirschStoerRationalInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a barycentric polynomial interpolation where the given sample points are equidistant. + + The sample points t, must be equidistant. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.Barycentric.InterpolatePolynomialEquidistantSorted + instead, which is more efficient. + + + + + Create a Neville polynomial interpolation based on arbitrary points. + If the points happen to be equidistant, consider to use the much more robust PolynomialEquidistant instead. + Otherwise, consider whether RationalWithoutPoles would not be a more robust alternative. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.NevillePolynomialInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Create a piecewise linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. 
+ + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LinearSpline.InterpolateSorted + instead, which is more efficient. + + + + + Create piecewise log-linear interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.LogLinear.InterpolateSorted + instead, which is more efficient. + + + + + Create an piecewise natural cubic spline interpolation based on arbitrary points, + with zero secondary derivatives at the boundaries. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateNaturalSorted + instead, which is more efficient. + + + + + Create an piecewise cubic Akima spline interpolation based on arbitrary points. + Akima splines are robust to outliers. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateAkimaSorted + instead, which is more efficient. + + + + + Create a piecewise cubic Hermite spline interpolation based on arbitrary points + and their slopes/first derivative. + + The sample points t. + The sample point values x(t). + The slope at the sample points. Optimized for arrays. + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.CubicSpline.InterpolateHermiteSorted + instead, which is more efficient. + + + + + Create a step-interpolation based on arbitrary points. + + The sample points t. + The sample point values x(t). + + An interpolation scheme optimized for the given sample points and values, + which can then be used to compute interpolations and extrapolations + on arbitrary points. + + + if your data is already sorted in arrays, consider to use + MathNet.Numerics.Interpolation.StepInterpolation.InterpolateSorted + instead, which is more efficient. + + + + + Barycentric Interpolation Algorithm. + + Supports neither differentiation nor integration. + + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + Barycentric weights (N), sorted ascendingly by x. + + + + Create a barycentric polynomial interpolation from a set of (x,y) value pairs with equidistant x, sorted ascendingly by x. + + + + + Create a barycentric polynomial interpolation from an unordered set of (x,y) value pairs with equidistant x. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a barycentric polynomial interpolation from an unsorted set of (x,y) value pairs with equidistant x. 
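When the sample points are already ordered, the sorted entry points named above (for example `MathNet.Numerics.Interpolation.CubicSpline.InterpolateNaturalSorted`) can be called directly. The `Interpolate`/`Differentiate`/`Integrate` member names in the sketch below are assumptions based on the interpolation interface described later in this file.

```csharp
// Sketch only: uses CubicSpline.InterpolateNaturalSorted as named above;
// the Interpolate/Differentiate/Integrate members are assumed names.
using System;
using MathNet.Numerics.Interpolation;

class SplineSketch
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] x = { 0.0, 0.8, 0.9, 0.1, -0.8 };

        // Natural cubic spline: zero second derivatives at both boundaries.
        var spline = CubicSpline.InterpolateNaturalSorted(t, x);

        Console.WriteLine(spline.Interpolate(2.5));     // value x(2.5)
        Console.WriteLine(spline.Differentiate(2.5));   // first derivative at t = 2.5
        Console.WriteLine(spline.Integrate(0.0, 4.0));  // definite integral over [0, 4]
    }
}
```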
+ + + + + Create a barycentric polynomial interpolation from a set of values related to linearly/equidistant spaced points within an interval. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + Order of the interpolation scheme, 0 <= order <= N. + In most cases a value between 3 and 8 gives good results. + + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + The values are assumed to be sorted ascendingly by x. + + Sample points (N), sorted ascendingly. + Sample values (N), sorted ascendingly by x. + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + WARNING: Works in-place and can thus causes the data array to be reordered. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Create a barycentric rational interpolation without poles, using Mike Floater and Kai Hormann's Algorithm. + + Sample points (N), no sorting assumed. + Sample values (N). + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Rational Interpolation (with poles) using Roland Bulirsch and Josef Stoer's Algorithm. + + + + This algorithm supports neither differentiation nor integration. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Bulirsch-Stoer rational interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Bulirsch-Stoer rational interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). 
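Since barycentric (Floater-Hormann) and Bulirsch-Stoer rational interpolation support neither differentiation nor integration, code working against the generic interpolation interface should check the capability flags first. In the sketch below, `SupportsDifferentiation`, `SupportsIntegration` and the exact shape of `Barycentric.InterpolateRationalFloaterHormannSorted` are assumed names based on the descriptions in this section.

```csharp
// Sketch only: the property and method names used here are assumptions.
using System;
using MathNet.Numerics.Interpolation;

class CapabilitySketch
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0 };
        double[] x = { 1.0, 2.0, 0.5, 3.0, 2.5 };

        // Pole-free rational interpolation, order 3 (typical values: 3..8).
        IInterpolation ip = Barycentric.InterpolateRationalFloaterHormannSorted(t, x, 3);

        Console.WriteLine(ip.Interpolate(2.5));

        // Differentiation and integration are not supported by this scheme.
        Console.WriteLine(ip.SupportsDifferentiation);  // expected: false
        Console.WriteLine(ip.SupportsIntegration);      // expected: false
    }
}
```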
+ + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Cubic Spline Interpolation. + + Supports both differentiation and integration. + + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + third order spline coefficients (N) + + + + Create a Hermite cubic spline interpolation from a set of (x,y) value pairs and their slope (first derivative), sorted ascendingly by x. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Hermite cubic spline interpolation from an unsorted set of (x,y) value pairs and their slope (first derivative). + + + + + Create an Akima cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + Akima splines are robust to outliers. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create an Akima cubic spline interpolation from an unsorted set of (x,y) value pairs. + Akima splines are robust to outliers. + + + + + Create a cubic spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x, + and custom boundary/termination conditions. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a cubic spline interpolation from an unsorted set of (x,y) value pairs and custom boundary/termination conditions. + + + + + Create a natural cubic spline interpolation from a set of (x,y) value pairs + and zero second derivatives at the two boundaries, sorted ascendingly by x. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a natural cubic spline interpolation from an unsorted set of (x,y) value pairs + and zero second derivatives at the two boundaries. + + + + + Three-Point Differentiation Helper. + + Sample Points t. + Sample Values x(t). + Index of the point of the differentiation. + Index of the first sample. + Index of the second sample. + Index of the third sample. + The derivative approximation. + + + + Tridiagonal Solve Helper. + + The a-vector[n]. + The b-vector[n], will be modified by this function. + The c-vector[n]. + The d-vector[n], will be modified by this function. 
+ The x-vector[n] + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Interpolation within the range of a discrete set of known data points. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Piece-wise Linear Interpolation. + + Supports both differentiation and integration. + + + Sample points (N+1), sorted ascending + Sample values (N or N+1) at the corresponding points; intercept, zero order coefficients + Slopes (N) at the sample points (first order coefficients): N + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Piece-wise Log-Linear Interpolation + + This algorithm supports differentiation, not integration. 
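Cubic and linear splines, unlike the rational schemes above, support both differentiation and integration. A small C# sketch using the Akima factory named earlier (Akima needs at least five sample points; data invented for illustration):

```csharp
using System;
using MathNet.Numerics.Interpolation;

class SplineCalculusSketch
{
    static void Main()
    {
        // Akima splines are robust to outliers and require at least five points.
        double[] t = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 };
        double[] x = { 0.0, 1.0, 4.0, 9.0, 16.0, 25.0 };

        CubicSpline akima = CubicSpline.InterpolateAkimaSorted(t, x);

        Console.WriteLine(akima.Interpolate(2.5));     // value x(2.5)
        Console.WriteLine(akima.Differentiate(2.5));   // first derivative
        Console.WriteLine(akima.Differentiate2(2.5));  // second derivative
        Console.WriteLine(akima.Integrate(1.0, 4.0));  // definite integral over [1,4]
    }
}
```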
+ + + + Internal Spline Interpolation + + + + Sample points (N), sorted ascending + Natural logarithm of the sample values (N) at the corresponding points + + + + Create a piecewise log-linear interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a piecewise log-linear interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Lagrange Polynomial Interpolation using Neville's Algorithm. + + + + This algorithm supports differentiation, but doesn't support integration. + + + When working with equidistant or Chebyshev sample points it is + recommended to use the barycentric algorithms specialized for + these cases instead of this arbitrary Neville algorithm. + + + + + Sample Points t, sorted ascendingly. + Sample Values x(t), sorted ascendingly by x. + + + + Create a Neville polynomial interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a Neville polynomial interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Quadratic Spline Interpolation. + + Supports both differentiation and integration. + + + sample points (N+1), sorted ascending + Zero order spline coefficients (N) + First order spline coefficients (N) + second order spline coefficients (N) + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. 
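A short sketch of the piecewise log-linear scheme documented above, which interpolates on the logarithm of the values (so they should be strictly positive) and, per the remarks, supports differentiation but not integration; sample data invented:

```csharp
using System;
using MathNet.Numerics.Interpolation;

class LogLinearSketch
{
    static void Main()
    {
        double[] t = { 0.0, 1.0, 2.0, 3.0 };
        double[] x = { 1.0, 2.0, 4.0, 8.0 };   // strictly positive, exponential growth

        var loglin = LogLinear.InterpolateSorted(t, x);

        Console.WriteLine(loglin.Interpolate(1.5));     // ~2.83, geometric mean of 2 and 4
        Console.WriteLine(loglin.Differentiate(1.5));   // differentiation is supported
        Console.WriteLine(loglin.SupportsIntegration);  // integration is not, per the docs above
    }
}
```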
+ + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t, + or the left index of the closest segment for extrapolation. + + + + + Left and right boundary conditions. + + + + + Natural Boundary (Zero second derivative). + + + + + Parabolically Terminated boundary. + + + + + Fixed first derivative at the boundary. + + + + + Fixed second derivative at the boundary. + + + + + A step function where the start of each segment is included, and the last segment is open-ended. + Segment i is [x_i, x_i+1) for i < N, or [x_i, infinity] for i = N. + The domain of the function is all real numbers, such that y = 0 where x <. + + Supports both differentiation and integration. + + + Sample points (N), sorted ascending + Samples values (N) of each segment starting at the corresponding sample point. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. + + Point t to integrate at. + + + + Definite integral between points a and b. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + Find the index of the greatest sample point smaller than t. + + + + + Wraps an interpolation with a transformation of the interpolated values. + + Neither differentiation nor integration is supported. + + + + Create a linear spline interpolation from a set of (x,y) value pairs, sorted ascendingly by x. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + WARNING: Works in-place and can thus causes the data array to be reordered and modified. + + + + + Create a linear spline interpolation from an unsorted set of (x,y) value pairs. + + + + + Gets a value indicating whether the algorithm supports differentiation (interpolated derivative). + + + + + Gets a value indicating whether the algorithm supports integration (interpolated quadrature). + + + + + Interpolate at point t. + + Point t to interpolate at. + Interpolated value x(t). + + + + Differentiate at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated first derivative at point t. + + + + Differentiate twice at point t. NOT SUPPORTED. + + Point t to interpolate at. + Interpolated second derivative at point t. + + + + Indefinite integral at point t. NOT SUPPORTED. + + Point t to integrate at. + + + + Definite integral between points a and b. NOT SUPPORTED. + + Left bound of the integration interval [a,b]. + Right bound of the integration interval [a,b]. + + + + A Matrix class with dense storage. 
The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. 
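A brief C# sketch contrasting the copying factory (OfArray) with the constructor that binds directly to a raw column-major array, as described above; values invented for illustration:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseMatrixSketch
{
    static void Main()
    {
        // Copy construction from a 2D array (independent storage).
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 1.0, 2.0 },
            { 3.0, 4.0 }
        });

        // Direct binding to a raw column-major array: no copy is made,
        // so changes to the array and the matrix affect each other.
        double[] columnMajor = { 1.0, 3.0, 2.0, 4.0 };   // columns: (1,3) and (2,4)
        var b = new DenseMatrix(2, 2, columnMajor);

        Console.WriteLine(a.Equals(b));        // True: same values
        Console.WriteLine(a.FrobeniusNorm());  // entry-wise Frobenius norm
    }
}
```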
+ + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. 
+ The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. 
It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. 
+ This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The divisor to use. + A vector to store the results in. 
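A minimal C# sketch of dense vector construction and the arithmetic described above (addition, scalar multiplication, dot product, norms); values invented:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class DenseVectorSketch
{
    static void Main()
    {
        var u = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
        var v = DenseVector.OfArray(new[] { 4.0, 5.0, 6.0 });

        Vector<double> sum = u + v;        // element-wise addition
        double dot = u.DotProduct(v);      // 1*4 + 2*5 + 3*6 = 32
        Vector<double> scaled = 2.0 * u;   // scalar multiplication

        Console.WriteLine(sum);
        Console.WriteLine(dot);
        Console.WriteLine(scaled.L2Norm());  // Euclidean norm of (2,4,6)
    }
}
```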
+ + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the remainder of. + The divisor to use, + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a double dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. 
+ Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. 
+ + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. 
+ The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. 
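A small C# sketch solving the same linear system with the LU and Cholesky factorizations documented above; the matrix is symmetric positive definite, so both apply (values invented):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class FactorizationSolveSketch
{
    static void Main()
    {
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0 });

        // LU factorization with pivoting (P*A = L*U), then solve Ax = b.
        var lu = a.LU();
        Vector<double> xLu = lu.Solve(b);

        // Cholesky factorization A = L*L' (throws if A is not symmetric positive definite).
        var chol = a.Cholesky();
        Vector<double> xChol = chol.Solve(b);

        Console.WriteLine(xLu);
        Console.WriteLine(xChol);
        Console.WriteLine(lu.Determinant);   // determinant from the LU factors
    }
}
```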
+ + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + Matrix V is encoded in the property EigenVectors in the way that: + - column corresponding to real eigenvalue represents real eigenvector, + - columns corresponding to the pair of complex conjugate eigenvalues + lambda[i] and lambda[i+1] encode real and imaginary parts of eigenvectors. + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. 
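A hedged C# sketch of the QR and SVD factorizations for an overdetermined (least-squares) system, using invented data; Rank and ConditionNumber correspond to the properties described above:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

class QrSvdSketch
{
    static void Main()
    {
        // More rows than columns: Solve returns the least-squares solution.
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 1.0, 1.0 },
            { 1.0, 2.0 },
            { 1.0, 3.0 }
        });
        var b = DenseVector.OfArray(new[] { 1.0, 2.0, 2.0 });

        // QR decomposition A = Q*R (Householder transformations).
        var qr = a.QR();
        Vector<double> xQr = qr.Solve(b);

        // Singular value decomposition A = U*S*V^T.
        var svd = a.Svd();
        Vector<double> xSvd = svd.Solve(b);

        Console.WriteLine(xQr);                  // least-squares line fit
        Console.WriteLine(svd.Rank);             // effective numerical rank
        Console.WriteLine(svd.ConditionNumber);  // max(S) / min(S)
    }
}
```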
+ + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. 
+ For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. 
+ Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. 
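The full/thin distinction described above (full: Q is m x m and R is m x n; thin: Q is m x n and R is n x n) is easy to check numerically. The sketch below uses numpy to illustrate the factorization itself; it is not the Math.NET constructor documented here.

```python
# Full vs. thin QR factorization, illustrated with numpy (not the Math.NET API).
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                     # m x n with m = 3, n = 2

Qf, Rf = np.linalg.qr(A, mode='complete')      # full: Q is m x m, R is m x n
Qt, Rt = np.linalg.qr(A, mode='reduced')       # thin: Q is m x n, R is n x n

assert np.allclose(Qf @ Rf, A) and np.allclose(Qt @ Rt, A)
assert np.allclose(Qt.T @ Qt, np.eye(2))       # columns of Q are orthonormal (Q^T Q = I)
```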
+ + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. 
Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + double version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. 
+ + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. 
+ Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
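The remarks above refer to example code for the solver. As a stand-in sketch of that usage pattern (a non-symmetric sparse system, an iterative solve, and a true-residual check), here is a minimal example using SciPy's bicgstab; it is not the Math.NET solver API documented here.

```python
# Minimal BiCGStab solve of a non-symmetric sparse system, using
# scipy.sparse.linalg.bicgstab purely as an illustration of the usage pattern.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n = 200
# Small non-symmetric, diagonally dominant tridiagonal test system.
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 2.5), np.full(n - 1, -1.2)],
             [-1, 0, 1], format='csr')
b = np.ones(n)

x, info = bicgstab(A, b)                 # info == 0 signals convergence
r = b - A @ x                            # the "true residual" as defined just below
print(info, np.linalg.norm(r) / np.linalg.norm(b))
```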
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373–387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
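A minimal sketch of the composite idea described above, assuming nothing about the Math.NET implementation: try a sequence of sub-solvers and keep the first answer that converges, falling back to a direct solve. The composite_solve helper below is hypothetical; SciPy's iterative routines stand in for the sub-solvers.

```python
# Hypothetical composite solver: run sub-solvers in order, keep the first
# result that converges, and fall back to a direct sparse solve otherwise.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, gmres, spsolve

def composite_solve(A, b, solvers=(bicgstab, gmres)):
    for solver in solvers:
        x, info = solver(A, b)
        if info == 0:                    # converged within this solver's criteria
            return x
    return spsolve(A.tocsc(), b)         # last resort: direct solve

n = 100
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
             [-1, 0, 1], format='csr')
x = composite_solve(A, np.ones(n))
```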
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
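GPBiCG is not available in SciPy, so the sketch below only illustrates the usage pattern the remarks above refer to: an iterative solve of a non-symmetric system helped by the diagonal (Jacobi) preconditioner described earlier, with BiCGStab standing in for GPBiCG.

```python
# Diagonal (Jacobi) preconditioning of a non-symmetric iterative solve.
# BiCGStab stands in for GPBiCG here; this is not the Math.NET API.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 300
A = sp.diags([np.full(n - 1, -1.0), np.linspace(2.0, 10.0, n), np.full(n - 1, -0.5)],
             [-1, 0, 1], format='csr')
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()                        # inverse of the matrix diagonal
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = bicgstab(A, b, M=M)
print(info, np.linalg.norm(b - A @ x))
```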
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
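SciPy does not ship a plain ILU(0); its spilu is a threshold-and-pivoting incomplete LU (closer to the ILUTP variant documented further below). The way an incomplete factorization is wrapped as a preconditioner is the same idea, though, so the following sketch illustrates it.

```python
# Using an incomplete LU factorization as a preconditioner for an iterative solve.
# scipy's spilu is threshold-based rather than level-0, but the wiring is the same.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, bicgstab

n = 400
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
             [-1, 0, 1], format='csc')
b = np.ones(n)

ilu = spilu(A)                                       # incomplete LU factors of A
M = LinearOperator((n, n), matvec=ilu.solve)         # apply the preconditioner via the ILU solve

x, info = bicgstab(A, b, M=M)
print(info)
```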
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
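The sketch below is not the ILUTP-Mem implementation documented here; it only illustrates what a drop tolerance and a fill limit do, using SciPy's spilu parameters, which play a comparable role: a larger drop tolerance discards more small entries and keeps the incomplete factors sparser.

```python
# Effect of drop tolerance / fill limit on incomplete LU factors, illustrated
# with scipy's spilu (not the ILUTP-Mem preconditioner documented here).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

n = 300
rng = np.random.default_rng(0)
A = (sp.diags([np.full(n, 5.0)], [0]) +
     sp.random(n, n, density=0.02, random_state=rng)).tocsc()

loose = spilu(A, drop_tol=1e-1, fill_factor=1.0)     # drop aggressively: sparser factors
tight = spilu(A, drop_tol=1e-6, fill_factor=10.0)    # keep almost everything: denser factors

print(loose.L.nnz + loose.U.nnz, "<=", tight.L.nnz + tight.U.nnz)
```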
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
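ML(k)-BiCGStab is not available in SciPy. Its distinctive ingredient, documented just below, is a set of k orthonormal (random) starting vectors for the Krylov subspace; the sketch only shows one way to construct and check such a set (the helper name is hypothetical), not the solver itself.

```python
# Building k orthonormal random starting vectors, the ingredient ML(k)-BiCGStab
# is parameterized by.  The helper name is hypothetical; this is not the solver.
import numpy as np

def random_orthonormal_starting_vectors(k, n, seed=0):
    """Return k orthonormal vectors of length n (requires k <= n)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))   # thin QR orthonormalizes the columns
    return [Q[:, i] for i in range(k)]

vs = random_orthonormal_starting_vectors(k=4, n=50)
print(np.allclose([v @ w for v in vs for w in vs],
                  np.eye(4).ravel()))                   # pairwise orthonormal
```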
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
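Recent SciPy releases (1.8 and later) include a tfqmr routine, so a minimal stand-in for the example usage referred to above could look like the sketch below; again, this is not the Math.NET solver API documented here.

```python
# Minimal TFQMR solve using scipy.sparse.linalg.tfqmr (SciPy >= 1.8 assumed).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import tfqmr

n = 200
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 3.0), np.full(n - 1, -0.8)],
             [-1, 0, 1], format='csr')
b = np.ones(n)

x, info = tfqmr(A, b)                    # info == 0 signals convergence
print(info, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```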
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
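The 3-array CSR layout described above (values, column indices, row pointers) is the same scheme SciPy uses, so a small sketch with scipy.sparse shows how the three arrays fit together and why pointwise multiplication keeps the result sparse. This is not the Math.NET SparseMatrix API.

```python
# The 3-array compressed-sparse-row layout: values, column indices, row pointers.
import numpy as np
import scipy.sparse as sp

data    = np.array([10.0, 20.0, 30.0, 40.0])   # non-zero values, stored row by row
indices = np.array([0, 2, 1, 2])               # column index of each stored value
indptr  = np.array([0, 2, 3, 4])               # row i occupies data[indptr[i]:indptr[i+1]]

A = sp.csr_matrix((data, indices, indptr), shape=(3, 3))
print(A.toarray())
print(A.nnz)                                   # number of stored non-zeros

B = A.multiply(A)                              # pointwise product stays sparse
print(B.nnz)
```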
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
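A toy illustration of the warning above (not Math.NET's SparseVector): once a non-zero scalar is added, every entry of the vector becomes non-zero, so nothing is left to store sparsely.

```python
# Toy sparse vector as an index -> value dictionary; zeros are implied elsewhere.
length = 100_000
sparse = {10: 3.5, 999: -1.0}

print(len(sparse), "stored entries before")

# Adding a non-zero scalar makes every implied zero become 0.1: fully dense.
shifted = {i: sparse.get(i, 0.0) + 0.1 for i in range(length)}
print(len(shifted), "stored entries after adding 0.1")
```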
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a double. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + double version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
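The two flavours documented above differ only in the sign convention: the canonical modulus takes the sign of the divisor, while the remainder (the C-style % operator) takes the sign of the dividend. numpy exposes both, which makes the difference easy to see.

```python
# Canonical modulus (sign of divisor) vs. remainder (sign of dividend).
import numpy as np

x = np.array([ 5.0, -5.0,  5.0, -5.0])
d = np.array([ 3.0,  3.0, -3.0, -3.0])

print(np.mod(x, d))    # canonical modulus, sign of divisor:   [ 2.  1. -1. -2.]
print(np.fmod(x, d))   # remainder (C-style %), sign of dividend: [ 2. -2.  2. -2.]
```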
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. 
+ + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. 
+ The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. 
+ The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiply this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply this one by. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a float dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. 
+ + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real dense vector to float-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. 
+ + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. 
+ + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. 
On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. 
+ The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. 
+ Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. 
+ If EVD algorithm failed to converge with matrix . + + + + Symmetric Householder reduction to tridiagonal form. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an orthogonal matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. 
+ If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Double value z1 + Double value z2 + Result multiplication of signum function and absolute value + + + + Swap column and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. 
On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + Scalar "c" value + Scalar "s" value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + float version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. 
+ + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. 
+ + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
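The example code referenced above did not survive in this extract. As a substitute, here is a minimal sketch of how the solver is typically driven; the BiCgStab, IterationCountStopCriterion&lt;T&gt; and ResidualStopCriterion&lt;T&gt; class names and the SolveIterative helper are assumptions based on current Math.NET Numerics releases.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class BiCgStabDemo
{
    static void Main()
    {
        // Small non-symmetric, diagonally dominant system A*x = b.
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 4, 1, 0 },
            { 2, 5, 1 },
            { 0, 1, 3 }
        });
        var b = DenseVector.OfArray(new double[] { 1, 2, 3 });

        // Iterate until the residual is tiny, but never more than 1000 steps.
        var x = a.SolveIterative(b, new BiCgStab(),
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        Console.WriteLine(x);
        Console.WriteLine((a * x - b).L2Norm()); // residual norm, should be ~0
    }
}
```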
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
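The example referenced above is likewise missing from this extract. The sketch below drives the solver through the Solve(matrix, input, result, iterator, preconditioner) signature documented in this section; the GpBiCg and DiagonalPreconditioner class names are assumptions taken from recent Math.NET Numerics releases and should be checked against your version.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class GpBiCgDemo
{
    static void Main()
    {
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 5, 2, 0 },
            { 1, 6, 2 },
            { 0, 1, 4 }
        });
        var b = DenseVector.OfArray(new double[] { 1, 0, 2 });
        var x = Vector<double>.Build.Dense(b.Count);   // result vector, filled in place

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        // Diagonal (Jacobi) preconditioning is usually a safe first choice.
        new GpBiCg().Solve(a, b, x, iterator, new DiagonalPreconditioner());

        Console.WriteLine(x);
    }
}
```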
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
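As a rough illustration of how this preconditioner is used: it is first initialized from the (square) system matrix and can then either approximate A⁻¹b directly or be handed to an iterative solver, as the member descriptions that follow explain. The class name ILU0Preconditioner and the SolveIterative overload taking a preconditioner are assumptions based on current Math.NET Numerics releases.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class Ilu0Demo
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new double[,]
        {
            {  4, -1,  0 },
            { -1,  4, -1 },
            {  0, -1,  4 }
        });
        var b = DenseVector.OfArray(new double[] { 1, 2, 3 });

        // Build the incomplete LU factors from the system matrix.
        var ilu = new ILU0Preconditioner();
        ilu.Initialize(a);

        // Approximate(rhs, lhs): a cheap stand-in for x = A^-1 * b.
        var rough = Vector<double>.Build.Dense(b.Count);
        ilu.Approximate(b, rough);
        Console.WriteLine(rough);

        // The same object can be passed to an iterative solver.
        var iterator = new Iterator<double>(new ResidualStopCriterion<double>(1e-10));
        Console.WriteLine(a.SolveIterative(b, new BiCgStab(), iterator, ilu));
    }
}
```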
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
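To make the fill level, drop tolerance and pivot tolerance settings of this preconditioner concrete, here is a small construction sketch. The class name ILUTPPreconditioner and the constructor parameter order (fill level, drop tolerance, pivot tolerance) are assumptions inferred from the descriptions in this file; verify them against the Math.NET Numerics version actually in use.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class IlutpDemo
{
    static void Main()
    {
        var a = SparseMatrix.OfArray(new double[,]
        {
            { 10, 1, 0, 0 },
            {  2, 12, 1, 0 },
            {  0, 3, 9, 1 },
            {  0, 0, 2, 8 }
        });
        var b = DenseVector.OfArray(new double[] { 1, 2, 3, 4 });

        // Arguments (assumed order): allowed fill relative to the non-zeros of A,
        // absolute drop tolerance, pivot tolerance (0.0 = never pivot).
        var ilutp = new ILUTPPreconditioner(10.0, 1e-4, 0.5);

        var iterator = new Iterator<double>(
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        Console.WriteLine(a.SolveIterative(b, new BiCgStab(), iterator, ilutp));
    }
}
```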
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal on Scientific Computing
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
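The example referenced above is not present in this extract; the following sketch shows one plausible way to run the solver, assuming the class is exposed as MlkBiCgStab and that the SolveIterative helper accepts stop criteria directly. The number of Lanczos starting vectors can be tuned through the property documented below.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class MlkBiCgStabDemo
{
    static void Main()
    {
        var a = DenseMatrix.OfArray(new double[,]
        {
            { 6, 1, 0, 0 },
            { 2, 7, 1, 0 },
            { 0, 2, 8, 1 },
            { 0, 0, 2, 9 }
        });
        var b = DenseVector.OfArray(new double[] { 1, 1, 1, 1 });

        var x = a.SolveIterative(b, new MlkBiCgStab(),
            new IterationCountStopCriterion<double>(1000),
            new ResidualStopCriterion<double>(1e-10));

        Console.WriteLine(x);
    }
}
```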
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
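As with the other solvers, the referenced example did not survive extraction. A minimal sketch, assuming the TFQMR class name and the SolveIterative helper of current Math.NET Numerics releases:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
using MathNet.Numerics.LinearAlgebra.Solvers;

static class TfqmrDemo
{
    static void Main()
    {
        // TFQMR works on sparse systems as well; CSR storage keeps memory low.
        var a = SparseMatrix.OfArray(new double[,]
        {
            { 3, 1, 0 },
            { 1, 4, 1 },
            { 0, 1, 5 }
        });
        var b = DenseVector.OfArray(new double[] { 1, 2, 3 });

        var x = a.SolveIterative(b, new TFQMR(),
            new ResidualStopCriterion<double>(1e-10),
            new IterationCountStopCriterion<double>(1000));

        Console.WriteLine((a * x - b).L2Norm());
    }
}
```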
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Evaluates whether this matrix is symmetric. + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. 
+ This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a scalar. + + The vector to scale. + The scalar value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a scalar. + + The scalar value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . 
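The sparse factory methods described above are best understood with a short construction example. This is only a sketch: the factory names (SparseMatrix.OfArray, SparseVector.OfEnumerable, Matrix&lt;double&gt;.Build.Sparse) and the NonZerosCount property reflect recent Math.NET Numerics releases and should be checked against the version bundled with the application.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

static class SparseConstructionDemo
{
    static void Main()
    {
        // Dense input array; only the non-zero cells are actually stored (CSR).
        var m = SparseMatrix.OfArray(new double[,]
        {
            { 2, 0, 0, 0 },
            { 0, 0, 3, 0 },
            { 0, 0, 0, 4 }
        });

        // Init-function overload: a 100x100 tridiagonal pattern without a temporary array.
        var big = Matrix<double>.Build.Sparse(100, 100,
            (i, j) => i == j ? 2.0 : (Math.Abs(i - j) == 1 ? -1.0 : 0.0));

        // Sparse vector from an ordinary enumerable; zeros are simply not stored.
        var v = SparseVector.OfEnumerable(new double[] { 0, 0, 3.5, 0, 1.0, 0 });

        Console.WriteLine(m.NonZerosCount);   // 3
        Console.WriteLine(big.NonZerosCount); // 298
        Console.WriteLine(v.NonZerosCount);   // 2
    }
}
```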
+ + + + Divides a vector with a scalar. + + The vector to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a float sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n,n,..', '(n,n,..)', '[n,n,...]', where n is a float. + + + A float sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a real sparse vector to float-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a real vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + float version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. 
+ + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. 
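The pointwise operations, dot products and norms listed in this section compose naturally; the short sketch below exercises a few of them. It uses the double-precision types for brevity, but the single-precision (float) API documented here mirrors it; the method names are assumed to match current Math.NET Numerics.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Double;

static class VectorOpsDemo
{
    static void Main()
    {
        var v = DenseVector.OfArray(new double[] { 1, -2, 3 });
        var w = DenseVector.OfArray(new double[] { 4, 5, -6 });

        Console.WriteLine(v.DotProduct(w));        // 1*4 + (-2)*5 + 3*(-6) = -24
        Console.WriteLine(v.PointwiseMultiply(w)); // (4, -10, -18)

        Console.WriteLine(v.L1Norm());       // 6, sum of absolute values
        Console.WriteLine(v.L2Norm());       // sqrt(14), Euclidean norm
        Console.WriteLine(v.InfinityNorm()); // 3, largest absolute value
        Console.WriteLine(v.Norm(3.0));      // general p-norm, here p = 3
    }
}
```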
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. 
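A few of the dense factory methods listed above, collected in one place. A minimal sketch assuming the DenseMatrix.OfArray, Matrix&lt;double&gt;.Build.Dense and Matrix&lt;double&gt;.Build.DenseIdentity entry points of Math.NET Numerics:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;

static class DenseMatrixFactoryDemo
{
    static void Main()
    {
        // Copy of a 2D array (independent of the source array).
        var a = DenseMatrix.OfArray(new double[,] { { 1, 2 }, { 3, 4 } });

        // Init function: element (i, j) = i + 10*j; stored column by column internally.
        var b = Matrix<double>.Build.Dense(3, 2, (i, j) => i + 10.0 * j);

        // Square identity matrix.
        var id = Matrix<double>.Build.DenseIdentity(3);

        Console.WriteLine(a);
        Console.WriteLine(b);
        Console.WriteLine(id);
    }
}
```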
+ + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. 
+ Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. 
+ + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex value. + The result of the division. + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. 
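The creation routines and products described above can be exercised with a short snippet like the following. This is a sketch only, assuming the Math.NET Numerics complex dense vector type: the namespace MathNet.Numerics.LinearAlgebra.Complex and the member names OfArray, DotProduct, ConjugateDotProduct, L1Norm, L2Norm and InfinityNorm are assumptions inferred from the documentation text, not stated in this file.

    using System;
    using System.Numerics;
    using MathNet.Numerics.LinearAlgebra.Complex;   // namespace assumed

    class DenseVectorExample
    {
        static void Main()
        {
            // Bind directly to a raw array: no copy, array and vector share storage.
            var data = new[] { new Complex(1, 0), new Complex(0, 2), new Complex(3, -1) };
            var v = new DenseVector(data);

            // Independent copy of an array (new memory block is allocated).
            var w = DenseVector.OfArray(new[] { new Complex(2, 1), new Complex(1, 1), new Complex(0, 4) });

            Complex dot  = v.DotProduct(w);           // sum of a[i]*b[i]
            Complex cdot = v.ConjugateDotProduct(w);  // sum of conj(a[i])*b[i]

            // L1, L2 and infinity norms as documented above.
            Console.WriteLine($"{dot} {cdot} {v.L1Norm()} {v.L2Norm()} {v.InfinityNorm()}");
        }
    }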
+ + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. 
+ + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the Frobenius norm of this matrix. + The Frobenius norm of this matrix. + + + Calculates the condition number of this matrix. 
+ The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. 
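In practice the factorization is usually obtained from a factory method on the matrix rather than by constructing the factorization class directly. Below is a minimal sketch of solving A*x = b through a Cholesky factorization, assuming the generic Math.NET Numerics builder API; the Matrix<double>.Build calls, Cholesky() and Solve() names are assumptions consistent with the documentation in this section.

    using System;
    using MathNet.Numerics.LinearAlgebra;

    class CholeskyExample
    {
        static void Main()
        {
            // A symmetric, positive definite matrix; the factorization is computed
            // when Cholesky() is called and throws if A is not positive definite.
            var A = Matrix<double>.Build.DenseOfArray(new double[,]
            {
                { 4.0, 1.0 },
                { 1.0, 3.0 }
            });
            var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

            var cholesky = A.Cholesky();      // A = L*L'
            var x = cholesky.Solve(b);        // solves A*x = b for x

            Console.WriteLine(x);
            Console.WriteLine(cholesky.Determinant);
        }
    }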
+ + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. 
This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The type of QR factorization to perform. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. 
A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . 
+ + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. 
+ Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. 
+ + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex value z1 + Complex value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. 
+ + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex version of the class. + + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. 
+ The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. 
+ + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the proper preconditioner.
+
+ The BiCGStab algorithm was taken from: "Templates for the solution of linear systems: Building blocks for iterative methods"
+ by Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra,
+ Victor Eijkhout, Roldan Pozo, Charles Romine and Henk van der Vorst.
+ URL: http://www.netlib.org/templates/Templates.html
+ The algorithm is described in Chapter 2, section 2.3.8, page 27.
+
+ The example code below provides an indication of the possible use of the solver.
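A minimal sketch of driving this solver: the Solve(matrix, input, result, iterator, preconditioner) call follows the parameter list documented below, while the class names BiCgStab, Iterator, IterationCountStopCriterion, ResidualStopCriterion and DiagonalPreconditioner are assumptions based on the Math.NET Numerics 3.x API and are not spelled out in this file.

    using System;
    using MathNet.Numerics.LinearAlgebra.Double;
    using MathNet.Numerics.LinearAlgebra.Double.Solvers;
    using MathNet.Numerics.LinearAlgebra.Solvers;

    class BiCgStabExample
    {
        static void Main()
        {
            // A small non-symmetric system A*x = b.
            var A = SparseMatrix.OfArray(new double[,]
            {
                { 4.0, 1.0, 0.0 },
                { 2.0, 5.0, 1.0 },
                { 0.0, 1.0, 3.0 }
            });
            var b = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
            var x = DenseVector.Create(b.Count, 0.0);   // result vector, filled in place

            // Stop after at most 1000 iterations or once the residual is small enough.
            var iterator = new Iterator<double>(
                new IterationCountStopCriterion<double>(1000),
                new ResidualStopCriterion<double>(1e-10));

            var solver = new BiCgStab();
            solver.Solve(A, b, x, iterator, new DiagonalPreconditioner());

            Console.WriteLine(x);
        }
    }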
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ "Faster PDE-based simulations using robust composite linear solvers"
+ by S. Bhowmick, P. Raghavan, L. McInnes and B. Norris,
+ Future Generation Computer Systems, Vol 20, 2004, pp 373 - 387.
+
+ Note that if an iterator is passed to this solver it will be used for all the sub-solvers.
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the proper preconditioner.
+
+ The GPBiCG algorithm was taken from: "GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with efficiency and robustness"
+ by S. Fujino, Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117.
+
+ The example code below provides an indication of the possible use of the solver.
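Usage is analogous to the BiCgStab sketch above. A compact variant is shown here, assuming the SolveIterative extension method and the GpBiCg class name from Math.NET Numerics 3.x; both are assumptions not stated in this file.

    using System;
    using MathNet.Numerics.LinearAlgebra;            // SolveIterative extension (assumed)
    using MathNet.Numerics.LinearAlgebra.Double;
    using MathNet.Numerics.LinearAlgebra.Double.Solvers;
    using MathNet.Numerics.LinearAlgebra.Solvers;

    class GpBiCgExample
    {
        static void Main()
        {
            var A = SparseMatrix.OfArray(new double[,]
            {
                { 3.0, 1.0 },
                { 2.0, 4.0 }
            });
            var b = DenseVector.OfArray(new[] { 5.0, 6.0 });

            // SolveIterative allocates the result vector and runs the solver
            // until one of the stop criteria is reached.
            var x = A.SolveIterative(
                b,
                new GpBiCg(),
                new IterationCountStopCriterion<double>(1000),
                new ResidualStopCriterion<double>(1e-10));

            Console.WriteLine(x);
        }
    }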
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ "Iterative methods for sparse linear systems" by Yousef Saad.
+ The algorithm is described in Chapter 10, section 10.3.2, page 275.
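A sketch of using this preconditioner on its own: the Initialize and Approximate calls follow the member documentation below, while the class name ILU0Preconditioner is an assumption. In normal use the preconditioner is passed to a solver's Solve call instead, as in the BiCgStab sketch above.

    using System;
    using MathNet.Numerics.LinearAlgebra.Double;
    using MathNet.Numerics.LinearAlgebra.Double.Solvers;

    class Ilu0Example
    {
        static void Main()
        {
            var A = SparseMatrix.OfArray(new double[,]
            {
                { 4.0, 1.0, 0.0 },
                { 1.0, 4.0, 1.0 },
                { 0.0, 1.0, 4.0 }
            });
            var b = DenseVector.OfArray(new[] { 1.0, 2.0, 3.0 });
            var x = DenseVector.Create(b.Count, 0.0);

            // Initialize() builds the combined L/U storage from A; Approximate()
            // then applies the preconditioner, i.e. x approximates A^-1 * b.
            var precond = new ILU0Preconditioner();   // class name assumed
            precond.Initialize(A);
            precond.Approximate(b, x);

            Console.WriteLine(x);
        }
    }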
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ "ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner"
+ by Tzu-Yi Chen, Department of Mathematics and Computer Science, Pomona College, Claremont CA 91711, USA.
+ Published in: Lecture Notes in Computer Science, Volume 3046 / 2004, pp. 20 - 28.
+ The algorithm is described in Section 2, page 22.
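A sketch of constructing this preconditioner with explicit settings: the (fill level, drop tolerance, pivot tolerance) constructor order and the Initialize/Approximate calls follow the documentation in this section, while the class name ILUTPPreconditioner is an assumption.

    using System;
    using MathNet.Numerics.LinearAlgebra.Double;
    using MathNet.Numerics.LinearAlgebra.Double.Solvers;

    class IlutpExample
    {
        static void Main()
        {
            var A = SparseMatrix.OfArray(new double[,]
            {
                { 5.0, 2.0, 0.0 },
                { 1.0, 6.0, 2.0 },
                { 0.0, 1.0, 4.0 }
            });
            var b = DenseVector.OfArray(new[] { 1.0, 0.0, 1.0 });
            var x = DenseVector.Create(b.Count, 0.0);

            // Fill level, drop tolerance and pivot tolerance as described above;
            // changing them after initialization invalidates the preconditioner.
            var precond = new ILUTPPreconditioner(200.0, 1e-4, 0.0);   // class name assumed
            precond.Initialize(A);
            precond.Approximate(b, x);

            Console.WriteLine(x);
        }
    }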
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal on Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
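The `<code>` sample referenced just above was lost when the XML tags were stripped from this documentation. As a stand-in, here is a minimal C# sketch of an ML(k)-BiCGStab solve in the double-precision variant (the Complex types documented here follow the same pattern); the `MlkBiCgStab` class name, the `MathNet.Numerics.LinearAlgebra.Double.Solvers` namespace, the stop criteria and the `UnitPreconditioner` are assumptions based on the usual Math.NET Numerics API and may differ between versions.

```csharp
// Sketch only: solve Ax = b with ML(k)-BiCGStab (assumed Math.NET Numerics API).
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // MlkBiCgStab (assumed namespace)
using MathNet.Numerics.LinearAlgebra.Solvers;          // Iterator, stop criteria, UnitPreconditioner

var A = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 4.0, 1.0, 0.0 },
    { 1.0, 3.0, 1.0 },
    { 0.0, 1.0, 2.0 }
});
var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
var x = Vector<double>.Build.Dense(b.Count);   // result vector, filled in by Solve

// Stop after at most 1000 iterations or once the residual drops below 1e-10.
var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(1000),
    new ResidualStopCriterion<double>(1e-10));

// Uses the default number of starting vectors for the Krylov sub-space; see the property above.
var solver = new MlkBiCgStab();
solver.Solve(A, b, x, iterator, new UnitPreconditioner<double>());
```

The `Solve` call mirrors the parameter list described above: coefficient matrix, right hand side, result vector, iterator and preconditioner.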
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative Methods for Sparse Linear Systems. +
+ Yousef Saad +
+ The algorithm is described in Chapter 7, Section 7.4.3, page 219. +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
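As with the previous solver, the example block did not survive the tag stripping; below is an equivalent hedged sketch for TFQMR, this time feeding it a sparse system. The `TFQMR` class and the `SparseMatrix.OfIndexed` factory are assumed names following the usual Math.NET Numerics conventions.

```csharp
// Sketch only: solve Ax = b with TFQMR on a sparse system (assumed Math.NET Numerics API).
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;           // SparseMatrix (assumed)
using MathNet.Numerics.LinearAlgebra.Double.Solvers;   // TFQMR (assumed)
using MathNet.Numerics.LinearAlgebra.Solvers;

var A = SparseMatrix.OfIndexed(3, 3, new[]
{
    Tuple.Create(0, 0, 4.0), Tuple.Create(0, 1, 1.0),
    Tuple.Create(1, 0, 1.0), Tuple.Create(1, 1, 3.0), Tuple.Create(1, 2, 1.0),
    Tuple.Create(2, 1, 1.0), Tuple.Create(2, 2, 2.0)
});
var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
var x = Vector<double>.Build.Dense(b.Count);

var iterator = new Iterator<double>(
    new IterationCountStopCriterion<double>(1000),
    new ResidualStopCriterion<double>(1e-10));

new TFQMR().Solve(A, b, x, iterator, new UnitPreconditioner<double>());
```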
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
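To make the CSR storage remarks above concrete, here is a small hedged sketch of creating a sparse matrix from indexed entries and calling a few of the members documented in this block; the double-precision `SparseMatrix` type and its `OfIndexed`/`NonZerosCount` members are assumed names following the usual Math.NET Numerics conventions (the Complex variant documented here is analogous).

```csharp
// Sketch only: a 1000x1000 CSR sparse matrix with three stored entries.
using System;
using MathNet.Numerics.LinearAlgebra.Double;   // SparseMatrix (assumed)

var s = SparseMatrix.OfIndexed(1000, 1000, new[]
{
    Tuple.Create(0, 0, 2.0),
    Tuple.Create(1, 1, 5.0),
    Tuple.Create(999, 0, -1.0)
});

Console.WriteLine(s.NonZerosCount);   // 3 stored values; the other 999,997 cells are implicit zeros
var lower = s.LowerTriangle();        // lower triangle as a new matrix
var norm  = s.FrobeniusNorm();        // sqrt(4 + 25 + 1) ≈ 5.48
```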
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
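The sparse vector members above follow the same pattern; a short hedged sketch (double variant, with `SparseVector.OfIndexedEnumerable` and `DenseVector.Create` assumed from the usual Math.NET Numerics naming):

```csharp
// Sketch only: a long sparse vector with two stored entries.
using System;
using MathNet.Numerics.LinearAlgebra.Double;

var v = SparseVector.OfIndexedEnumerable(100000, new[]
{
    Tuple.Create(10, 1.5),
    Tuple.Create(99999, -2.0)
});
var w = DenseVector.Create(100000, i => 1.0);   // all-ones dense vector

Console.WriteLine(v.DotProduct(w));   // 1.5 + (-2.0) = -0.5
Console.WriteLine(v.L1Norm());        // |1.5| + |-2.0| = 3.5
```

As the remarks above warn, adding a non-zero scalar to a sparse vector fills every cell, so operations of that kind are better performed on dense storage.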
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. 
+ + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. 
+ + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + A Matrix class with dense storage. The underlying storage is a one dimensional array in column-major order (column by column). + + + + + Number of rows. + + Using this instead of the RowCount property to speed up calculating + a matrix index in the data array. + + + + Number of columns. + + Using this instead of the ColumnCount property to speed up calculating + a matrix index in the data array. + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. 
+ + + + + Create a new dense matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Gets the matrix's data. + + The matrix's data. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. 
+ + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of add + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. 
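A brief worked sketch of the dense matrix norms listed above, with the values computed by hand (the generic `Matrix<double>.Build` API is assumed):

```csharp
// Sketch only: the induced and entry-wise matrix norms documented above.
using System;
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1.0, -2.0 },
    { 3.0,  4.0 }
});

Console.WriteLine(m.L1Norm());         // max absolute column sum: max(1+3, 2+4) = 6
Console.WriteLine(m.InfinityNorm());   // max absolute row sum:    max(1+2, 3+4) = 7
Console.WriteLine(m.FrobeniusNorm());  // sqrt(1 + 4 + 9 + 16) ≈ 5.48
```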
+ + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A vector using dense storage. + + + + + Number of elements + + + + + Gets the vector's data. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new dense vector directly binding to a raw array. + The array is used directly without copying. + Very efficient, but changes to the array and the vector will affect each other. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. 
+ This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Gets the vector's data. + + The vector's data. + + + + Returns a reference to the internal data structure. + + The DenseVector whose internal data we are + returning. + + A reference to the internal date of the given vector. + + + + + Returns a vector bound directly to a reference of the provided array. + + The array to bind to the DenseVector object. + + A DenseVector whose values are bound to the given array. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts another vector from this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Multiplies a vector with a complex. + + The vector to scale. + The Complex32 value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The Complex32 value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The Complex32 value. + The result of the division. + If is . 
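A matching sketch for the dense vector operations above (construction, dot product and norms); the Complex32 variant documented here behaves the same way, with the conjugated dot product available where it matters:

```csharp
// Sketch only: dense vector construction, dot product and norms.
using System;
using MathNet.Numerics.LinearAlgebra;

var a = Vector<double>.Build.DenseOfArray(new[] { 3.0, -4.0, 0.0 });
var c = Vector<double>.Build.DenseOfArray(new[] { 1.0,  2.0, 5.0 });

Console.WriteLine(a.DotProduct(c));  // 3*1 + (-4)*2 + 0*5 = -5
Console.WriteLine(a.L2Norm());       // sqrt(9 + 16 + 0) = 5
Console.WriteLine(a.Norm(1.0));      // p-norm with p = 1: |3| + |-4| + |0| = 7
```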
+ + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Creates a Complex32 dense vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a double. + + + A Complex32 dense vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex dense vector to double-precision dense vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + A matrix type for diagonal matrices. + + + Diagonal matrices can be non-square matrices but the diagonal always starts + at element 0,0. A diagonal matrix will throw an exception if non diagonal + entries are set. The exception to this is when the off diagonal elements are + 0.0 or NaN; these settings will cause no change to the diagonal matrix. + + + + + Gets the matrix's data. + + The matrix's data. + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. 
+ All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns. + All diagonal cells of the matrix will be initialized to the provided value, all non-diagonal ones to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to contain the diagonal elements only and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + The matrix to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + The array to copy from must be diagonal as well. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value from the provided enumerable. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Create a new diagonal matrix with diagonal values sampled from the provided random distribution. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. 
+ + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the division. + + + + Computes the determinant of this matrix. + + The determinant of this matrix. + + + + Returns the elements of the diagonal in a . + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + Calculates the condition number of this matrix. + The condition number of the matrix. + + + Computes the inverse of this matrix. + If is not a square matrix. + If is singular. + The inverse of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. 
+ + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + Always thrown + Permutation in diagonal matrix are senseless, because of matrix nature + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for dense matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Factorize matrix using the modified Gram-Schmidt method. + + Initial matrix. On exit is replaced by Q. + Number of rows in Q. + Number of columns in Q. + On exit is filled by R. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Gets or sets Tau vector. 
Contains additional information on Q - used for native solver. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + If row count is less then column count + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + If SVD algorithm failed to converge with matrix . + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. 
+ + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + A class which encapsulates the functionality of a Cholesky factorization for user matrices. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + + + + Computes the Cholesky factorization in-place. + + On entry, the matrix to factor. On exit, the Cholesky factor matrix + If is null. + If is not a square matrix. + If is not positive definite. + + + + Initializes a new instance of the class. This object will compute the + Cholesky factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + If is not positive definite. + + + + Calculates the Cholesky factorization of the input matrix. 
+ + The matrix to be factorized. + If is null. + If is not a square matrix. + If is not positive definite. + If does not have the same dimensions as the existing factor. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a complex matrix. + + + If A is Hermitian, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is Hermitian. + I.e. A = V*D*V' and V*VH=I. + If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + + + + Initializes a new instance of the class. This object will compute the + the eigenvalue decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + If it is known whether the matrix is symmetric or not the routine can skip checking it itself. + If is null. + If EVD algorithm failed to converge with matrix . + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + The eigen vectors to work on. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + The eigen vectors to work on. + Previously tridiagonalized matrix by . + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + The eigen vectors to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. 
Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + The eigen vectors to work on. + The eigen values to work on. + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any complex square matrix A may be decomposed as A = QR where Q is an unitary mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + + + + Initializes a new instance of the class. This object creates an unitary matrix + using the modified Gram-Schmidt method. + + The matrix to factor. + If is null. + If row count is less then column count + If is rank deficient + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + + + The computation of the LU factorization is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + LU factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + If is null. + If is not a square matrix. + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + + + + + Initializes a new instance of the class. This object will compute the + QR factorization when the constructor is called and cache it's factorization. + + The matrix to factor. + The QR factorization method to use. + If is null. + + + + Generate column from initial matrix to work array + + Initial matrix + The first row + Column index + Generated vector + + + + Perform calculation of Q or R + + Work array + Q or R matrices + The first row + The last row + The first column + The last column + Number of available CPUs + + + + Solves a system of linear equations, AX = B, with A QR factorized. 
+ + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD) for . + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + + + + Initializes a new instance of the class. This object will compute the + the singular value decomposition when the constructor is called and cache it's decomposition. + + The matrix to factor. + Compute the singular U and VT vectors or not. + If is null. + + + + + Calculates absolute value of multiplied on signum function of + + Complex32 value z1 + Complex32 value z2 + Result multiplication of signum function and absolute value + + + + Interchanges two vectors and + + Source matrix + The number of rows in + Column A index to swap + Column B index to swap + + + + Scale column by starting from row + + Source matrix + The number of rows in + Column to scale + Row to scale from + Scale value + + + + Scale vector by starting from index + + Source vector + Row to scale from + Scale value + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Calculate Norm 2 of the column in matrix starting from row + + Source matrix + The number of rows in + Column index + Start row index + Norm2 (Euclidean norm) of the column + + + + Calculate Norm 2 of the vector starting from index + + Source vector + Start index + Norm2 (Euclidean norm) of the vector + + + + Calculate dot product of and conjugating the first vector. + + Source matrix + The number of rows in + Index of column A + Index of column B + Starting row index + Dot product value + + + + Performs rotation of points in the plane. Given two vectors x and y , + each vector element of these vectors is replaced as follows: x(i) = c*x(i) + s*y(i); y(i) = c*y(i) - s*x(i) + + Source matrix + The number of rows in + Index of column A + Index of column B + scalar cos value + scalar sin value + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Complex32 version of the class. 
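The members above document the dense factorizations (Cholesky, LU, QR, EVD, SVD) and their Solve overloads. As a rough illustration only, here is a minimal sketch assuming the Math.NET Numerics 3.x-style public API (Matrix<double>.Build, Cholesky(), LU(), QR(), Svd()); the exact types shipped with this particular build may differ.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class FactorizationSketch
{
    static void Main()
    {
        // Small symmetric positive definite system A*x = b.
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0, 0.0 },
            { 1.0, 3.0, 1.0 },
            { 0.0, 1.0, 2.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

        var xChol = A.Cholesky().Solve(b);   // requires symmetric positive definite A
        var xLu   = A.LU().Solve(b);         // general square A, with pivoting
        var xQr   = A.QR().Solve(b);         // also usable for tall least-squares systems

        var svd = A.Svd();                   // singular values, rank, condition number
        Console.WriteLine($"rank = {svd.Rank}, cond = {svd.ConditionNumber}");
        Console.WriteLine($"det  = {A.Determinant()}, ||A||_F = {A.FrobeniusNorm()}");
        Console.WriteLine(xChol - xLu);      // all three solutions should agree (near zero)
    }
}
```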
+ + + + + Initializes a new instance of the Matrix class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar to divide by each element of the matrix. + The matrix to store the result of the division. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. 
+ + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. 
+ Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + A Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Bi-Conjugate Gradient Stabilized (BiCGStab) solver is an 'improvement' + of the standard Conjugate Gradient (CG) solver. Unlike the CG solver the + BiCGStab can be used on non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The Bi-CGSTAB algorithm was taken from:
+ Templates for the solution of linear systems: Building blocks + for iterative methods +
+ Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, + June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, + Charles Romine and Henk van der Vorst +
+ Url: http://www.netlib.org/templates/Templates.html +
+ Algorithm is described in Chapter 2, section 2.3.8, page 27 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
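The example markup referenced above ("The example code below provides an indication of the possible use of the solver") is not present in this stripped documentation. As a stand-in, a minimal sketch of typical BiCGStab usage, assuming the 3.x-style solver API (Iterator<double>, ResidualStopCriterion, BiCgStab, DiagonalPreconditioner); names and signatures in this particular build may differ.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class BiCgStabSketch
{
    static void Main()
    {
        // Non-symmetric system A*x = b (BiCGStab does not require symmetry).
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 5.0, 1.0, 0.0 },
            { 2.0, 4.0, 1.0 },
            { 0.0, 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 6.0, 7.0, 4.0 });
        var x = Vector<double>.Build.Dense(b.Count);   // result is written in place

        // Stop on a small residual or after a fixed number of iterations.
        var iterator = new Iterator<double>(
            new ResidualStopCriterion<double>(1e-10),
            new IterationCountStopCriterion<double>(1000));

        var preconditioner = new DiagonalPreconditioner();  // see the preconditioner docs below
        preconditioner.Initialize(A);

        new BiCgStab().Solve(A, b, x, iterator, preconditioner);

        Console.WriteLine(x);
        Console.WriteLine(b - A * x);   // true residual b - A*x, as described above
    }
}
```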
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient , A. + The solution , b. + The result , x. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A composite matrix solver. The actual solver is made by a sequence of + matrix solvers. + + + + Solver based on:
+ Faster PDE-based simulations using robust composite linear solvers
+ S. Bhowmick, P. Raghavan, L. McInnes, B. Norris
+ Future Generation Computer Systems, Vol 20, 2004, pp 373-387
+
+ + Note that if an iterator is passed to this solver it will be used for all the sub-solvers. + +
+
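As noted above, an iterator passed to the composite solver is shared by all of its sub-solvers, and the same Iterator object carries the stopping rules for every solver in this file. A brief sketch of that plumbing, assuming the 3.x-style API (Iterator<double> and the stop-criterion classes); the exact types in this build may differ.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra.Solvers;

class IteratorSketch
{
    static void Main()
    {
        // One Iterator<double> bundles the stopping rules; a composite solver
        // hands this same instance to each sub-solver it tries in sequence.
        var iterator = new Iterator<double>(
            new ResidualStopCriterion<double>(1e-8),        // stop on small residual
            new IterationCountStopCriterion<double>(500));  // hard cap on iterations

        // ...pass 'iterator' to any solver's Solve(matrix, input, result, iterator, preconditioner)...

        Console.WriteLine(iterator.Status);   // reports whether the run has converged
    }
}
```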
+ + + The collection of solvers that will be used + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A diagonal preconditioner. The preconditioner uses the inverse + of the matrix diagonal as preconditioning values. + + + + + The inverse of the matrix diagonal. + + + + + Returns the decomposed matrix diagonal. + + The matrix diagonal. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + A Generalized Product Bi-Conjugate Gradient iterative matrix solver. + + + + The Generalized Product Bi-Conjugate Gradient (GPBiCG) solver is an + alternative version of the Bi-Conjugate Gradient stabilized (CG) solver. + Unlike the CG solver the GPBiCG solver can be used on + non-symmetric matrices.
+ Note that much of the success of the solver depends on the selection of the + proper preconditioner. +
+ + The GPBiCG algorithm was taken from:
+ GPBiCG(m,l): A hybrid of BiCGSTAB and GPBiCG methods with + efficiency and robustness +
+ S. Fujino +
+ Applied Numerical Mathematics, Volume 41, 2002, pp 107 - 117 +
+
+ + The example code below provides an indication of the possible use of the + solver. + +
+
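The original example markup for this solver is likewise missing. Usage mirrors the BiCgStab sketch further above; only the solver type changes (the 3.x-style class name GpBiCg is assumed here, and may differ in this build).

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

// Same kind of setup as in the BiCgStab sketch above, shrunk to a 2x2 system.
var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 5.0, 1.0 }, { 2.0, 4.0 } });
var b = Vector<double>.Build.Dense(new[] { 6.0, 6.0 });
var x = Vector<double>.Build.Dense(b.Count);
var iterator = new Iterator<double>(new ResidualStopCriterion<double>(1e-10));

new GpBiCg().Solve(A, b, x, iterator, new DiagonalPreconditioner());
// The switch-over between BiCgStab and GPBiCG steps is controlled by the
// "number of ... steps" properties documented below.
```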
+ + + Indicates the number of BiCGStab steps should be taken + before switching. + + + + + Indicates the number of GPBiCG steps should be taken + before switching. + + + + + Gets or sets the number of steps taken with the BiCgStab algorithm + before switching over to the GPBiCG algorithm. + + + + + Gets or sets the number of steps taken with the GPBiCG algorithm + before switching over to the BiCgStab algorithm. + + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Decide if to do steps with BiCgStab + + Number of iteration + true if yes, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + An incomplete, level 0, LU factorization preconditioner. + + + The ILU(0) algorithm was taken from:
+ Iterative methods for sparse linear systems
+ Yousef Saad
+ Algorithm is described in Chapter 10, section 10.3.2, page 275
+
+
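The two preconditioner members that follow, Initialize and Approximate, define the contract every preconditioner in this file implements. The concrete ILU(0) class name is not visible in this stripped documentation, so the sketch below (3.x-style API assumed) drives the diagonal preconditioner as a stand-in; an ILU(0) preconditioner is used through exactly the same two calls.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

class PreconditionerSketch
{
    static void Main()
    {
        var A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4.0, 1.0 },
            { 1.0, 3.0 }
        });
        var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

        // Any IPreconditioner<double> works the same way; substitute the ILU(0)
        // implementation from this assembly where the stand-in is used here.
        IPreconditioner<double> m = new DiagonalPreconditioner();
        m.Initialize(A);                         // build the internal data from A

        var z = Vector<double>.Build.Dense(b.Count);
        m.Approximate(b, z);                     // z approximates M^-1 * b, as used inside the solvers

        Console.WriteLine(z);
    }
}
```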
+ + + The matrix holding the lower (L) and upper (U) matrices. The + decomposition matrices are combined to reduce storage. + + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + A new matrix containing the lower triagonal elements. + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + This class performs an Incomplete LU factorization with drop tolerance + and partial pivoting. The drop tolerance indicates which additional entries + will be dropped from the factorized LU matrices. + + + The ILUTP-Mem algorithm was taken from:
+ ILUTP_Mem: a Space-Efficient Incomplete LU Preconditioner +
+ Tzu-Yi Chen, Department of Mathematics and Computer Science,
+ Pomona College, Claremont CA 91711, USA
+ Published in:
+ Lecture Notes in Computer Science
+ Volume 3046 / 2004
+ pp. 20 - 28
+ Algorithm is described in Section 2, page 22 +
+
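The ILUTP preconditioner exposes three tuning knobs named in the remarks below (FillLevel, DropTolerance, PivotTolerance) and a constructor that takes the three settings directly. Its concrete class name is not visible in this stripped documentation, so IlutpPreconditioner and the parameter names in the sketch below are placeholders only; substitute the actual type from this assembly.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;

// Placeholder type and parameter names; the documented settings are:
//   fill level (fraction of the original non-zeros), drop tolerance, pivot tolerance.
IPreconditioner<double> ilutp = new IlutpPreconditioner(
    10.0,     // fill level > 1.0 allows more non-zeros than the input matrix
    1e-4,     // drop tolerance; the documented standard value is 0.0001
    0.0);     // pivot tolerance 0.0 disables pivoting, per the remarks below

// ilutp.Initialize(sparseA);   // the docs recommend a sparse input matrix
// ...then pass 'ilutp' to any solver's Solve(...) like the other preconditioners.
```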
+ + + The default fill level. + + + + + The default drop tolerance. + + + + + The decomposed upper triangular matrix. + + + + + The decomposed lower triangular matrix. + + + + + The array containing the pivot values. + + + + + The fill level. + + + + + The drop tolerance. + + + + + The pivot tolerance. + + + + + Initializes a new instance of the class with the default settings. + + + + + Initializes a new instance of the class with the specified settings. + + + The amount of fill that is allowed in the matrix. The value is a fraction of + the number of non-zero entries in the original matrix. Values should be positive. + + + The absolute drop tolerance which indicates below what absolute value an entry + will be dropped from the matrix. A drop tolerance of 0.0 means that no values + will be dropped. Values should always be positive. + + + The pivot tolerance which indicates at what level pivoting will take place. A + value of 0.0 means that no pivoting will take place. + + + + + Gets or sets the amount of fill that is allowed in the matrix. The + value is a fraction of the number of non-zero entries in the original + matrix. The standard value is 200. + + + + Values should always be positive and can be higher than 1.0. A value lower + than 1.0 means that the eventual preconditioner matrix will have fewer + non-zero entries as the original matrix. A value higher than 1.0 means that + the eventual preconditioner can have more non-zero values than the original + matrix. + + + Note that any changes to the FillLevel after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the absolute drop tolerance which indicates below what absolute value + an entry will be dropped from the matrix. The standard value is 0.0001. + + + + The values should always be positive and can be larger than 1.0. A low value will + keep more small numbers in the preconditioner matrix. A high value will remove + more small numbers from the preconditioner matrix. + + + Note that any changes to the DropTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Gets or sets the pivot tolerance which indicates at what level pivoting will + take place. The standard value is 0.0 which means pivoting will never take place. + + + + The pivot tolerance is used to calculate if pivoting is necessary. Pivoting + will take place if any of the values in a row is bigger than the + diagonal value of that row divided by the pivot tolerance, i.e. pivoting + will take place if row(i,j) > row(i,i) / PivotTolerance for + any j that is not equal to i. + + + Note that any changes to the PivotTolerance after creating the preconditioner + will invalidate the created preconditioner and will require a re-initialization of + the preconditioner. + + + Thrown if a negative value is provided. + + + + Returns the upper triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the upper triagonal elements. + + + + Returns the lower triagonal matrix that was created during the LU decomposition. + + + This method is used for debugging purposes only and should normally not be used. + + A new matrix containing the lower triagonal elements. 
+ + + + Returns the pivot array. This array is not needed for normal use because + the preconditioner will return the solution vector values in the proper order. + + + This method is used for debugging purposes only and should normally not be used. + + The pivot array. + + + + Initializes the preconditioner and loads the internal data structures. + + + The upon which this preconditioner is based. Note that the + method takes a general matrix type. However internally the data is stored + as a sparse matrix. Therefore it is not recommended to pass a dense matrix. + + If is . + If is not a square matrix. + + + + Pivot elements in the according to internal pivot array + + Row to pivot in + + + + Was pivoting already performed + + Pivots already done + Current item to pivot + true if performed, otherwise false + + + + Swap columns in the + + Source . + First column index to swap + Second column index to swap + + + + Sort vector descending, not changing vector but placing sorted indices to + + Start sort form + Sort till upper bound + Array with sorted vector indices + Source + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + + Pivot elements in according to internal pivot array + + Source . + Result after pivoting. + + + + An element sort algorithm for the class. + + + This sort algorithm is used to sort the columns in a sparse matrix based on + the value of the element on the diagonal of the matrix. + + + + + Sorts the elements of the vector in decreasing + fashion. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Sorts the elements of the vector in decreasing + fashion using heap sort algorithm. The vector itself is not affected. + + The starting index. + The stopping index. + An array that will contain the sorted indices once the algorithm finishes. + The that contains the values that need to be sorted. + + + + Build heap for double indices + + Root position + Length of + Indices of + Target + + + + Sift double indices + + Indices of + Target + Root position + Length of + + + + Sorts the given integers in a decreasing fashion. + + The values. + + + + Sort the given integers in a decreasing fashion using heapsort algorithm + + Array of values to sort + Length of + + + + Build heap + + Target values array + Root position + Length of + + + + Sift values + + Target value array + Root position + Length of + + + + Exchange values in array + + Target values array + First value to exchange + Second value to exchange + + + + A simple milu(0) preconditioner. + + + Original Fortran code by Yousef Saad (07 January 2004) + + + + Use modified or standard ILU(0) + + + + Gets or sets a value indicating whether to use modified or standard ILU(0). + + + + + Gets a value indicating whether the preconditioner is initialized. + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix upon which the preconditioner is based. + If is . + If is not a square or is not an + instance of SparseCompressedRowMatrixStorage. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector b. + The left hand side vector x. + + + + MILU0 is a simple milu(0) preconditioner. + + Order of the matrix. + Matrix values in CSR format (input). + Column indices (input). 
+ Row pointers (input). + Matrix values in MSR format (output). + Row pointers and column indices (output). + Pointer to diagonal elements (output). + True if the modified/MILU algorithm should be used (recommended) + Returns 0 on success or k > 0 if a zero pivot was encountered at step k. + + + + A Multiple-Lanczos Bi-Conjugate Gradient stabilized iterative matrix solver. + + + + The Multiple-Lanczos Bi-Conjugate Gradient stabilized (ML(k)-BiCGStab) solver is an 'improvement' + of the standard BiCgStab solver. + + + The algorithm was taken from:
+ ML(k)BiCGSTAB: A BiCGSTAB variant based on multiple Lanczos starting vectors +
+ Man-Chung Yeung and Tony F. Chan +
+ SIAM Journal of Scientific Computing +
+ Volume 21, Number 4, pp. 1263 - 1290 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
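Here too the original example markup is missing. Usage follows the same pattern as the other Krylov solvers above; the 3.x-style class name MlkBiCgStab is assumed, and the number of Lanczos starting vectors is left at its documented default.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 3.0, 1.0 }, { 1.0, 2.0 } });
var b = Vector<double>.Build.Dense(new[] { 4.0, 3.0 });
var x = Vector<double>.Build.Dense(b.Count);

var iterator = new Iterator<double>(
    new ResidualStopCriterion<double>(1e-10),
    new IterationCountStopCriterion<double>(200));

new MlkBiCgStab().Solve(A, b, x, iterator, new DiagonalPreconditioner());
```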
+ + + The default number of starting vectors. + + + + + The collection of starting vectors which are used as the basis for the Krylov sub-space. + + + + + The number of starting vectors used by the algorithm + + + + + Gets or sets the number of starting vectors. + + + Must be larger than 1 and smaller than the number of variables in the matrix that + for which this solver will be used. + + + + + Resets the number of starting vectors to the default value. + + + + + Gets or sets a series of orthonormal vectors which will be used as basis for the + Krylov sub-space. + + + + + Gets the number of starting vectors to create + + Maximum number + Number of variables + Number of starting vectors to create + + + + Returns an array of starting vectors. + + The maximum number of starting vectors that should be created. + The number of variables. + + An array with starting vectors. The array will never be larger than the + but it may be smaller if + the is smaller than + the . + + + + + Create random vectors array + + Number of vectors + Size of each vector + Array of random vectors + + + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Source A. + Residual data. + x data. + b data. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Transpose Free Quasi-Minimal Residual (TFQMR) iterative matrix solver. + + + + The TFQMR algorithm was taken from:
+ Iterative methods for sparse linear systems. +
+ Yousef Saad +
+ Algorithm is described in Chapter 7, section 7.4.3, page 219 +
+ + The example code below provides an indication of the possible use of the + solver. + +
+
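The TFQMR example markup is also absent. The call pattern is identical to the solvers above; only the solver type changes (3.x-style class name TFQMR assumed).

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Solvers;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

var A = Matrix<double>.Build.DenseOfArray(new double[,] { { 4.0, 1.0 }, { 2.0, 5.0 } });
var b = Vector<double>.Build.Dense(new[] { 5.0, 7.0 });
var x = Vector<double>.Build.Dense(b.Count);

var iterator = new Iterator<double>(new ResidualStopCriterion<double>(1e-10));
new TFQMR().Solve(A, b, x, iterator, new DiagonalPreconditioner());
```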
+ + + Calculates the true residual of the matrix equation Ax = b according to: residual = b - Ax + + Instance of the A. + Residual values in . + Instance of the x. + Instance of the b. + + + + Is even? + + Number to check + true if even, otherwise false + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + A Matrix with sparse storage, intended for very large matrices where most of the cells are zero. + The underlying storage scheme is 3-array compressed-sparse-row (CSR) Format. + Wikipedia - CSR. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new square sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the order is less than one. + + + + Create a new sparse matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + If the row or column count is less than one. + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new square sparse identity matrix where each diagonal value is set to One. + + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . 
+ If the result matrix's dimensions are not the same as this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract to this matrix. + The matrix to store the result of subtraction. + If the other matrix is . + If the two matrices don't have the same dimensions. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. 
+ + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The matrix to pointwise divide this one by. + The matrix to store the result of the pointwise division. + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + A vector with sparse storage, intended for very large vectors where most of the cells are zero. + + The sparse vector is not thread safe. + + + + Gets the number of non zero elements in the vector. + + The number of non zero elements. + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new sparse vector with the given length. + All cells of the vector will be initialized to zero. + Zero-length vectors are not supported. + + If length is less than one. + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
+ + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + Warning, the new 'sparse vector' with a non-zero scalar added to it will be a 100% filled + sparse vector and very inefficient. Would be better to work with a dense vector instead. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Negates vector and saves result to + + Target vector + + + + Conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Adds two Vectors together and returns the results. + + One of the vectors to add. + The other vector to add. + The result of the addition. + If and are not the same size. + If or is . + + + + Returns a Vector containing the negated values of . + + The vector to get the values from. + A vector containing the negated values as . + If is . + + + + Subtracts two Vectors and returns the results. + + The vector to subtract from. + The vector to subtract. + The result of the subtraction. + If and are not the same size. + If or is . + + + + Multiplies a vector with a complex. + + The vector to scale. + The complex value. + The result of the multiplication. + If is . + + + + Multiplies a vector with a complex. + + The complex value. + The vector to scale. + The result of the multiplication. + If is . + + + + Computes the dot product between two Vectors. + + The left row vector. + The right column vector. + The dot product between the two vectors. + If and are not the same size. + If or is . + + + + Divides a vector with a complex. + + The vector to divide. + The complex value. + The result of the division. + If is . + + + + Computes the modulus of each element of the vector of the given divisor. + + The vector whose elements we want to compute the modulus of. + The divisor to use, + The result of the calculation + If is . + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. 
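The warning above — adding a non-zero scalar turns a sparse vector into a fully populated one — holds for any sparse representation. A small SciPy sketch (SciPy stands in for the documented library purely for illustration; the sizes and values are invented):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A very long vector with only two stored non-zero cells.
v = csr_matrix(([3.0, -1.5], ([0, 0], [10, 500_000])), shape=(1, 1_000_000))
print(v.nnz)                       # 2 non-zero elements stored

# Adding a non-zero scalar makes every cell non-zero, so the result is
# effectively dense -- exactly the inefficiency the text warns about.
dense = v.toarray() + 0.1
print(np.count_nonzero(dense))     # 1000000
```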
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = ( ∑|this[i]|^p )^(1/p) + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Creates a double sparse vector based on a string. The string can be in the following formats (without the + quotes): 'n', 'n;n;..', '(n;n;..)', '[n;n;...]', where n is a Complex32. + + + A double sparse vector containing the values specified by the given string. + + + the string to parse. + + + An that supplies culture-specific formatting information. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Converts the string representation of a complex sparse vector to double-precision sparse vector equivalent. + A return value indicates whether the conversion succeeded or failed. + + + A string containing a complex vector to convert. + + + An that supplies culture-specific formatting information about value. + + + The parsed value. + + + If the conversion succeeds, the result will contain a complex number equivalent to value. + Otherwise the result will be null. + + + + + Complex32 version of the class. + + + + + Initializes a new instance of the Vector class. + + + + + Set all values whose absolute value is smaller than the threshold to zero. + + + + + Conjugates vector and save result to + + Target vector + + + + Negates vector and saves result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to add. + + + The vector to store the result of the addition. + + + + + Adds another vector to this vector and stores the result into the result vector. + + + The vector to add to this one. + + + The vector to store the result of the addition. + + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + + The scalar to subtract. + + + The vector to store the result of the subtraction. + + + + + Subtracts another vector to this vector and stores the result into the result vector. + + + The vector to subtract from this one. + + + The vector to store the result of the subtraction. + + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + + The scalar to multiply. + + + The vector to store the result of the multiplication. + + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + + The scalar to divide with. + + + The vector to store the result of the division. + + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. 
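The vector norms and scalar element-wise operations listed above follow the usual definitions; a brief NumPy check with an invented vector (illustration only):

```python
import numpy as np

v = np.array([3.0, -4.0, 0.5, 1.0])

np.abs(v).sum()                       # L1 (Manhattan) norm: sum of absolute values
np.abs(v).max()                       # infinity norm: maximum absolute value
p = 3
(np.abs(v) ** p).sum() ** (1.0 / p)   # p-norm: (sum |v[i]|^p)^(1/p)

v * 2.0      # each element multiplied by a scalar
v / 2.0      # each element divided by a scalar
2.0 / v      # a scalar divided by each element
```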
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The vector to pointwise divide this one by. + The vector to store the result of the pointwise division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. 
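The distinction drawn above between the canonical modulus (result takes the sign of the divisor) and the remainder (% operator, result takes the sign of the dividend) maps directly onto NumPy's `mod` and `fmod`; a conceptual sketch, not the documented API:

```python
import numpy as np

x = np.array([ 7.0, -7.0,  7.0, -7.0])   # dividends
d = np.array([ 3.0,  3.0, -3.0, -3.0])   # divisors

np.mod(x, d)    # canonical modulus, sign of the divisor:    [ 1.  2. -2. -1.]
np.fmod(x, d)   # remainder (% semantics), sign of dividend: [ 1. -1.  1. -1.]
```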
+ + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + + The p value. + + + Scalar ret = ( ∑|At(i)|^p )^(1/p) + + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the index of the minimum element. + + The index of minimum element. + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + + The p value. + + + This vector normalized to a unit vector with respect to the p-norm. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new matrix straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. 
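A minimal sketch of the 'random' and 'positive definite' constructions mentioned above: a dense matrix of samples from the standard distribution, and a positive (semi-)definite matrix whose entries are sums of products of samples. NumPy is used for illustration; the factory method names belong to the documented builder, not to this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4

A = rng.standard_normal((n, n))     # dense matrix sampled from the standard distribution
P = A @ A.T                         # symmetric, positive (semi-)definite by construction

np.allclose(P, P.T)                               # True
bool(np.all(np.linalg.eigvalsh(P) >= -1e-12))     # all eigenvalues non-negative (up to rounding)
```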
+ + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. 
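For orientation, the column-wise, row-wise, diagonal and block-of-matrices constructions described above correspond to familiar array operations; a NumPy sketch with invented data (illustrative only):

```python
import numpy as np

c1 = np.array([1.0, 2.0, 3.0])
c2 = np.array([4.0, 5.0, 6.0])

np.column_stack([c1, c2])       # matrix built from column vectors (3 x 2)
np.vstack([c1, c2])             # matrix built from row vectors    (2 x 3)
np.diag([1.0, 2.0])             # matrix with the given diagonal, zeros elsewhere
np.block([[np.eye(2),        np.zeros((2, 3))],
          [np.zeros((1, 2)), np.ones((1, 3))]])   # matrix assembled from sub-matrices
```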
+ + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Generic linear algebra type builder, for situations where a matrix or vector + must be created in a generic way. Usage of generic builders should not be + required in normal user code. + + + + + Gets the value of 0.0 for type T. + + + + + Gets the value of 1.0 for type T. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. 
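The recurring contrast above between "directly binding to a raw array" (changes to the array and the matrix affect each other) and "a new memory block will be allocated" (an independent copy) is the usual view-versus-copy distinction; a tiny NumPy analogue, illustrative only:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])

bound = data.reshape(2, 2)          # binds to the same memory: no copy is made
copied = data.reshape(2, 2).copy()  # independent: a new memory block is allocated

data[0] = 99.0
print(bound[0, 0])    # 99.0 -- the bound view sees the change
print(copied[0, 0])   # 1.0  -- the copy is unaffected
```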
+ + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new matrix straight from an initialized matrix storage instance. 
+ If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with the same kind and dimensions of the provided example. + + + + + Create a new matrix with the same kind of the provided example. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples. + + + + + Create a new matrix with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new dense matrix with values sampled from the provided random distribution. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new dense matrix with values sampled from the standard distribution with a system random source. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the standard distribution. + + + + + Create a new positive definite dense matrix where each value is the product + of two samples from the provided random distribution. + + + + + Create a new dense matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new dense matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new dense matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to be in column-major order (column by column) and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + + Create a new dense matrix and initialize each value to the same provided value. + + + + + Create a new dense matrix and initialize each value using the provided init function. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal dense matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new dense matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable. + The enumerable is assumed to be in column-major order (column by column). 
+ This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix of T as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new dense matrix from a 2D array of existing matrices. + The matrices in the array are not required to be dense already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new sparse matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse matrix of T with the given number of rows and columns. + + The number of rows. + The number of columns. + + + + Create a new sparse matrix and initialize each value to the same provided value. + + + + + Create a new sparse matrix and initialize each value using the provided init function. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal sparse matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new diagonal dense identity matrix with a one-diagonal. + + + + + Create a new sparse matrix as a copy of the given other matrix. + This new matrix will be independent from the other matrix. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given two-dimensional array. + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable. + The enumerable is assumed to be in row-major order (row by row). + This new matrix will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + + Create a new sparse matrix with the given number of rows and columns as a copy of the given array. + The array is assumed to be in column-major order (column by column). + This new matrix will be independent from the provided array. + A new memory block will be allocated for storing the matrix. + + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable columns. + Each enumerable in the master enumerable specifies a column. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column arrays. + This new matrix will be independent from the arrays. 
+ A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given column vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given enumerable of enumerable rows. + Each enumerable in the master enumerable specifies a row. + This new matrix will be independent from the enumerables. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row arrays. + This new matrix will be independent from the arrays. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix as a copy of the given row vectors. + This new matrix will be independent from the vectors. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new sparse matrix from a 2D array of existing matrices. + The matrices in the array are not required to be sparse already. + If the matrices do not align properly, they are placed on the top left + corner of their cell with the remaining fields left zero. + + + + + Create a new diagonal matrix straight from an initialized matrix storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a new diagonal matrix with the given number of rows and columns. + All cells of the matrix will be initialized to zero. + Zero-length matrices are not supported. + + + + + Create a new diagonal matrix with the given number of rows and columns directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. 
+ Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new square diagonal matrix directly binding to a raw array. + The array is assumed to represent the diagonal values and is used directly without copying. + Very efficient, but changes to the array and the matrix will affect each other. + + + + + Create a new diagonal matrix and initialize each diagonal value to the same provided value. + + + + + Create a new diagonal matrix and initialize each diagonal value using the provided init function. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal identity matrix with a one-diagonal. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given vector. + This new matrix will be independent from the vector. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new diagonal matrix with the diagonal as a copy of the given array. + This new matrix will be independent from the array. + A new memory block will be allocated for storing the matrix. + + + + + Create a new vector straight from an initialized matrix storage instance. + If you have an instance of a discrete storage type instead, use their direct methods instead. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with the same kind and dimension of the provided example. + + + + + Create a new vector with the same kind of the provided example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new vector with a type that can represent and is closest to both provided samples and the dimensions of example. + + + + + Create a new vector with a type that can represent and is closest to both provided samples. + + + + + Create a new dense vector with values sampled from the provided random distribution. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector with values sampled from the standard distribution with a system random source. + + + + + Create a new dense vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a dense vector of T with the given size. + + The size of the vector. + + + + Create a dense vector of T that is directly bound to the specified array. + + + + + Create a new dense vector and initialize each value using the provided value. + + + + + Create a new dense vector and initialize each value using the provided init function. + + + + + Create a new dense vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given array. + This new vector will be independent from the array. 
+ A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new dense vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector straight from an initialized vector storage instance. + The storage is used directly without copying. + Intended for advanced scenarios where you're working directly with + storage for performance or interop reasons. + + + + + Create a sparse vector of T with the given size. + + The size of the vector. + + + + Create a new sparse vector and initialize each value using the provided value. + + + + + Create a new sparse vector and initialize each value using the provided init function. + + + + + Create a new sparse vector as a copy of the given other vector. + This new vector will be independent from the other vector. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given array. + This new vector will be independent from the array. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given enumerable. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + Create a new sparse vector as a copy of the given indexed enumerable. + Keys must be provided at most once, zero is assumed if a key is omitted. + This new vector will be independent from the enumerable. + A new memory block will be allocated for storing the vector. + + + + + A class which encapsulates the functionality of a Cholesky factorization. + For a symmetric, positive definite matrix A, the Cholesky factorization + is an lower triangular matrix L so that A = L*L'. + + + The computation of the Cholesky factorization is done at construction time. If the matrix is not symmetric + or positive definite, the constructor will throw an exception. + + Supported data types are double, single, , and . + + + + Gets the lower triangular form of the Cholesky matrix. + + + + + Gets the determinant of the matrix for which the Cholesky matrix was computed. + + + + + Gets the log determinant of the matrix for which the Cholesky matrix was computed. + + + + + Calculates the Cholesky factorization of the input matrix. + + The matrix to be factorized. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A Cholesky factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A Cholesky factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Eigenvalues and eigenvectors of a real matrix. + + + If A is symmetric, then A = V*D*V' where the eigenvalue matrix D is + diagonal and the eigenvector matrix V is orthogonal. + I.e. A = V*D*V' and V*VT=I. 
+ If A is not symmetric, then the eigenvalue matrix D is block diagonal + with the real eigenvalues in 1-by-1 blocks and any complex eigenvalues, + lambda + i*mu, in 2-by-2 blocks, [lambda, mu; -mu, lambda]. The + columns of V represent the eigenvectors in the sense that A*V = V*D, + i.e. A.Multiply(V) equals V.Multiply(D). The matrix V may be badly + conditioned, or even singular, so the validity of the equation + A = V*D*Inverse(V) depends upon V.Condition(). + + Supported data types are double, single, , and . + + + + Gets or sets a value indicating whether matrix is symmetric or not + + + + + Gets the absolute value of determinant of the square matrix for which the EVD was computed. + + + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Gets or sets the eigen values (λ) of matrix in ascending value. + + + + + Gets or sets eigenvectors. + + + + + Gets or sets the block diagonal eigenvalue matrix. + + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A EVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A EVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the QR decomposition Modified Gram-Schmidt Orthogonalization. + Any real square matrix A may be decomposed as A = QR where Q is an orthogonal mxn matrix and R is an nxn upper triangular matrix. + + + The computation of the QR decomposition is done at construction time by modified Gram-Schmidt Orthogonalization. + + Supported data types are double, single, , and . + + + + Classes that solves a system of linear equations, AX = B. + + Supported data types are double, single, , and . + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, AX = B. + + The right hand side Matrix, B. + The left hand side Matrix, X. + + + + Solves a system of linear equations, Ax = b + + The right hand side vector, b. + The left hand side Vector, x. + + + + Solves a system of linear equations, Ax = b. + + The right hand side vector, b. + The left hand side Matrix>, x. + + + + A class which encapsulates the functionality of an LU factorization. + For a matrix A, the LU factorization is a pair of lower triangular matrix L and + upper triangular matrix U so that A = L*U. + In the Math.Net implementation we also store a set of pivot elements for increased + numerical stability. The pivot elements encode a permutation matrix P such that P*A = L*U. + + + The computation of the LU factorization is done at construction time. + + Supported data types are double, single, , and . + + + + Gets the lower triangular factor. + + + + + Gets the upper triangular factor. + + + + + Gets the permutation applied to LU factorization. + + + + + Gets the determinant of the matrix for which the LU factorization was computed. + + + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. 
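A compact sketch of the factor-then-solve pattern described for the Cholesky and LU factorizations above, using SciPy purely as an illustration with invented matrices. Note one convention difference: the text above states the pivoting as P*A = L*U, while SciPy's `lu` returns a permutation with A = P·L·U; the permutation simply sits on the other side.

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve, cho_factor, cho_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

P, L, U = lu(A)                    # pivoted factorization: A = P @ L @ U
x = lu_solve(lu_factor(A), b)      # factor once, then solve A x = b
np.allclose(A @ x, b)              # True

S = np.array([[4.0, 2.0],          # symmetric positive definite
              [2.0, 3.0]])
y = cho_solve(cho_factor(S), b)    # Cholesky: S = L @ L.T, then solve S y = b
np.allclose(S @ y, b)              # True
```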
+ + + + Solves a system of linear equations, AX = B, with A LU factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A LU factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Returns the inverse of this matrix. The inverse is calculated using LU decomposition. + + The inverse of this matrix. + + + + The type of QR factorization go perform. + + + + + Compute the full QR factorization of a matrix. + + + + + Compute the thin QR factorization of a matrix. + + + + + A class which encapsulates the functionality of the QR decomposition. + Any real square matrix A (m x n) may be decomposed as A = QR where Q is an orthogonal matrix + (its columns are orthogonal unit vectors meaning QTQ = I) and R is an upper triangular matrix + (also called right triangular matrix). + + + The computation of the QR decomposition is done at construction time by Householder transformation. + If a factorization is performed, the resulting Q matrix is an m x m matrix + and the R matrix is an m x n matrix. If a factorization is performed, the + resulting Q matrix is an m x n matrix and the R matrix is an n x n matrix. + + Supported data types are double, single, , and . + + + + Gets or sets orthogonal Q matrix + + + + + Gets the upper triangular factor R. + + + + + Gets the absolute determinant value of the matrix for which the QR matrix was computed. + + + + + Gets a value indicating whether the matrix is full rank or not. + + true if the matrix is full rank; otherwise false. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + A class which encapsulates the functionality of the singular value decomposition (SVD). + Suppose M is an m-by-n matrix whose entries are real numbers. + Then there exists a factorization of the form M = UΣVT where: + - U is an m-by-m unitary matrix; + - Σ is m-by-n diagonal matrix with nonnegative real numbers on the diagonal; + - VT denotes transpose of V, an n-by-n unitary matrix; + Such a factorization is called a singular-value decomposition of M. A common convention is to order the diagonal + entries Σ(i,i) in descending order. In this case, the diagonal matrix Σ is uniquely determined + by M (though the matrices U and V are not). The diagonal entries of Σ are known as the singular values of M. + + + The computation of the singular value decomposition is done at construction time. + + Supported data types are double, single, , and . + + + Indicating whether U and VT matrices have been computed during SVD factorization. + + + + Gets the singular values (Σ) of matrix in ascending value. + + + + + Gets the left singular vectors (U - m-by-m unitary matrix) + + + + + Gets the transpose right singular vectors (transpose of V, an n-by-n unitary matrix) + + + + + Returns the singular values as a diagonal . + + The singular values as a diagonal . 
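The full versus thin QR split and the SVD factors described above look like this in NumPy (illustration only; the documented library's property names are not used here, and the sample matrix is invented):

```python
import numpy as np

M = np.arange(12, dtype=float).reshape(4, 3)

Qf, Rf = np.linalg.qr(M, mode='complete')   # full QR: Q is 4x4, R is 4x3
Qt, Rt = np.linalg.qr(M, mode='reduced')    # thin QR: Q is 4x3, R is 3x3

U, s, Vt = np.linalg.svd(M)                 # M = U @ Sigma @ Vt
Sigma = np.zeros_like(M)
Sigma[:len(s), :len(s)] = np.diag(s)        # singular values as a rectangular diagonal matrix
np.allclose(M, U @ Sigma @ Vt)              # True
```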
+ + + + Gets the effective numerical matrix rank. + + The number of non-negligible singular values. + + + + Gets the two norm of the . + + The 2-norm of the . + + + + Gets the condition number max(S) / min(S) + + The condition number. + + + + Gets the determinant of the square matrix for which the SVD was computed. + + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A SVD factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, Ax = b, with A SVD factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + Supported data types are double, single, , and . + + Defines the base class for Matrix classes. + + + Defines the base class for Matrix classes. + + + + + The value of 1.0. + + + + + The value of 0.0. + + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + + + + Complex conjugates each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + + + + Add a scalar to each element of the matrix and stores the result in the result vector. + + The scalar to add. + The matrix to store the result of the addition. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. 
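The SVD members above (singular values, U and VT, rank, 2-norm, condition number, Solve) can be used roughly as follows. This is a sketch assuming the Math.NET Numerics `Svd(bool computeVectors)` factory; the vectors must be computed for Solve to work.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SvdSketch
{
    static void Main()
    {
        // 3x2 matrix: the system below is overdetermined, so Solve gives a least-squares answer.
        var m = Matrix<double>.Build.DenseOfArray(new double[,] {
            { 3.0, 0.0 },
            { 0.0, 0.5 },
            { 0.0, 0.0 }
        });

        var svd = m.Svd(true);                  // true: also compute U and VT
        Console.WriteLine(svd.S);               // singular values 3 and 0.5
        Console.WriteLine(svd.Rank);            // 2
        Console.WriteLine(svd.L2Norm);          // 3, the largest singular value
        Console.WriteLine(svd.ConditionNumber); // 3 / 0.5 = 6
        Console.WriteLine(svd.U);               // 3x3 left singular vectors
        Console.WriteLine(svd.VT);              // 2x2 transposed right singular vectors
        Console.WriteLine(svd.W);               // singular values as a 3x2 diagonal matrix

        var b = Vector<double>.Build.Dense(new[] { 3.0, 1.0, 0.0 });
        Console.WriteLine(svd.Solve(b));        // (1, 2)
    }
}
```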
+ + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar denominator to use. + The matrix to store the result of the division. + + + + Divides a scalar by each element of the matrix and stores the result in the result matrix. + + The scalar numerator to use. + The matrix to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given divisor each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the matrix. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise raise this matrix to an exponent matrix and store the result into the result matrix. + + The exponent matrix to raise this matrix values to. + The matrix to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix with another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result matrix. + + The matrix to store the result. + + + + Adds a scalar to each element of the matrix. + + The scalar to add. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds a scalar to each element of the matrix and stores the result in the result matrix. + + The scalar to add. + The matrix to store the result of the addition. 
+ If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The result of the addition. + If the two matrices don't have the same dimensions. + + + + Adds another matrix to this matrix. + + The matrix to add to this matrix. + The matrix to store the result of the addition. + If the two matrices don't have the same dimensions. + + + + Subtracts a scalar from each element of the matrix. + + The scalar to subtract. + A new matrix containing the subtraction of this matrix and the scalar. + + + + Subtracts a scalar from each element of the matrix and stores the result in the result matrix. + + The scalar to subtract. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts each element of the matrix from a scalar. + + The scalar to subtract from. + A new matrix containing the subtraction of the scalar and this matrix. + + + + Subtracts each element of the matrix from a scalar and stores the result in the result matrix. + + The scalar to subtract from. + The matrix to store the result of the subtraction. + If this matrix and are not the same size. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Subtracts another matrix from this matrix. + + The matrix to subtract. + The matrix to store the result of the subtraction. + If the two matrices don't have the same dimensions. + + + + Multiplies each element of this matrix with a scalar. + + The scalar to multiply with. + The result of the multiplication. + + + + Multiplies each element of the matrix by a scalar and places results into the result matrix. + + The scalar to multiply the matrix with. + The matrix to store the result of the multiplication. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides each element of this matrix with a scalar. + + The scalar to divide with. + The result of the division. + + + + Divides each element of the matrix by a scalar and places results into the result matrix. + + The scalar to divide the matrix with. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Divides a scalar by each element of the matrix. + + The scalar to divide. + The result of the division. + + + + Divides a scalar by each element of the matrix and places results into the result matrix. + + The scalar to divide. + The matrix to store the result of the division. + If the result matrix's dimensions are not the same as this matrix. + + + + Multiplies this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.ColumnCount != rightSide.Count. + + + + Multiplies this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.RowCount. + If this.ColumnCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ). + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != .Count. + + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. 
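The scalar, vector and pointwise operations listed above map onto short one-liners. The sketch below assumes the Math.NET Numerics names (`Add`, `Multiply`, `LeftMultiply`, `PointwiseMultiply`, `PointwisePower`, `PointwiseExp`); `Multiply` treats the vector as a column on the right, `LeftMultiply` as a row on the left.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ArithmeticSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });
        var b = Matrix<double>.Build.DenseOfArray(new double[,] { { 2.0, 2.0 }, { 2.0, 2.0 } });
        var v = Vector<double>.Build.Dense(new[] { 1.0, 1.0 });

        // Scalar and matrix arithmetic.
        Console.WriteLine(a.Add(10.0));         // 10 added to every element
        Console.WriteLine(a.Subtract(a));       // zero matrix
        Console.WriteLine(a.Multiply(2.0));     // every element doubled
        Console.WriteLine(a.Divide(2.0));       // every element halved

        // Matrix-vector products.
        Console.WriteLine(a.Multiply(v));       // A * v: (3, 7)
        Console.WriteLine(a.LeftMultiply(v));   // v * A: (4, 6)

        // Pointwise (element-by-element) operations.
        Console.WriteLine(a.PointwiseMultiply(b));           // 2, 4, 6, 8
        Console.WriteLine(a.PointwisePower(2.0));            // 1, 4, 9, 16
        Console.WriteLine(a.PointwiseExp().PointwiseLog());  // back to a, up to rounding

        // The "result" overloads documented above reuse an existing matrix instead of allocating.
        var result = Matrix<double>.Build.Dense(2, 2);
        a.PointwiseMultiply(b, result);
        Console.WriteLine(result);
    }
}
```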
+ + + + Left multiply a matrix with a vector ( = vector * matrix ) and place the result in the result vector. + + The vector to multiply with. + The result of the multiplication. + + + + Multiplies this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.Rows. + If the result matrix's dimensions are not the this.Rows x other.Columns. + + + + Multiplies this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.Rows. + The result of the multiplication. + + + + Multiplies this matrix with transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. + The result of the multiplication. + + + + Multiplies this matrix with the conjugate transpose of another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Columns != other.ColumnCount. + If the result matrix's dimensions are not the this.RowCount x other.RowCount. + + + + Multiplies this matrix with the conjugate transpose of another matrix and returns the result. + + The matrix to multiply with. + If this.Columns != other.ColumnCount. + The result of the multiplication. + + + + Multiplies the conjugate transpose of this matrix by a vector and returns the result. + + The vector to multiply with. + The result of the multiplication. + If this.RowCount != rightSide.Count. + + + + Multiplies the conjugate transpose of this matrix with a vector and places the results into the result vector. + + The vector to multiply with. + The result of the multiplication. + If result.Count != this.ColumnCount. + If this.RowCount != .Count. + + + + Multiplies the conjugate transpose of this matrix with another matrix and places the results into the result matrix. + + The matrix to multiply with. + The result of the multiplication. + If this.Rows != other.RowCount. + If the result matrix's dimensions are not the this.ColumnCount x other.ColumnCount. + + + + Multiplies the conjugate transpose of this matrix with another matrix and returns the result. + + The matrix to multiply with. + If this.Rows != other.RowCount. 
+ The result of the multiplication. + + + + Raises this square matrix to a positive integer exponent and places the results into the result matrix. + + The positive integer exponent to raise the matrix to. + The result of the power. + + + + Multiplies this square matrix with another matrix and returns the result. + + The positive integer exponent to raise the matrix to. + + + + Negate each element of this matrix. + + A matrix containing the negated values. + + + + Negate each element of this matrix and place the results into the result matrix. + + The result of the negation. + if the result matrix's dimensions are not the same as this matrix. + + + + Complex conjugate each element of this matrix. + + A matrix containing the conjugated values. + + + + Complex conjugate each element of this matrix and place the results into the result matrix. + + The result of the conjugation. + if the result matrix's dimensions are not the same as this matrix. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + A matrix containing the results. + + + + Computes the remainder (matrix % divisor), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar denominator to use. + Matrix to store the results in. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + A matrix containing the results. + + + + Computes the remainder (dividend % matrix), where the result has the sign of the dividend, + for each element of the matrix. + + The scalar numerator to use. + Matrix to store the results in. + + + + Pointwise multiplies this matrix with another matrix. + + The matrix to pointwise multiply with this one. + If this matrix and are not the same size. + A new matrix that is the pointwise multiplication of this matrix and . + + + + Pointwise multiplies this matrix with another matrix and stores the result into the result matrix. + + The matrix to pointwise multiply with this one. + The matrix to store the result of the pointwise multiplication. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise divide this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + A new matrix that is the pointwise division of this matrix and . + + + + Pointwise divide this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise division. + If this matrix and are not the same size. 
+ If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise raise this matrix to an exponent and store the result into the result matrix. + + The exponent to raise this matrix values to. + + + + Pointwise raise this matrix to an exponent. + + The exponent to raise this matrix values to. + The matrix to store the result into. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise modulus. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix. + + The pointwise denominator matrix to use. + If this matrix and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this matrix by another matrix and stores the result into the result matrix. + + The pointwise denominator matrix to use. + The matrix to store the result of the pointwise remainder. + If this matrix and are not the same size. + If this matrix and are not the same size. + + + + Helper function to apply a unary function to a matrix. The function + f modifies the matrix given to it in place. Before its + called, a copy of the 'this' matrix is first created, then passed to + f. The copy is then returned as the result + + Function which takes a matrix, modifies it in place and returns void + New instance of matrix which is the result + + + + Helper function to apply a unary function which modifies a matrix + in place. + + Function which takes a matrix, modifies it in place and returns void + The matrix to be passed to f and where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two matrices + and modifies the latter in place. A copy of the "this" matrix is + first made and then passed to f together with the other matrix. The + copy is then returned as the result + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The resulting matrix + If this matrix and are not the same dimension. + + + + Helper function to apply a binary function which takes two matrices + and modifies the second one in place + + Function which takes two matrices, modifies the second in place and returns void + The other matrix to be passed to the function as argument. It is not modified + The matrix to store the result. + The resulting matrix + If this matrix and are not the same dimension. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The matrix to store the result. 
+ If this matrix and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + + + + + + Pointwise applies the atan2 function to each value of the current + matrix and a given other matrix being the 'x' of atan2 and the + 'this' matrix being the 'y' + + The other matrix 'y' + The matrix with the result and 'x' + + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the trace of this matrix. + + The trace of this matrix + If the matrix is not square + + + + Calculates the rank of the matrix. + + effective numerical rank, obtained from SVD + + + + Calculates the nullity of the matrix. + + effective numerical nullity, obtained from SVD + + + Calculates the condition number of this matrix. + The condition number of the matrix. 
+ The condition number is calculated using singular value decomposition. + + + Computes the determinant of this matrix. + The determinant of this matrix. + + + + Computes an orthonormal basis for the null space of this matrix, + also known as the kernel of the corresponding matrix transformation. + + + + + Computes an orthonormal basis for the column space of this matrix, + also known as the range or image of the corresponding matrix transformation. + + + + Computes the inverse of this matrix. + The inverse of this matrix. + + + Computes the Moore-Penrose Pseudo-Inverse of this matrix. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + + + + Computes the Kronecker product of this matrix with the given matrix. The new matrix is M-by-N + with M = this.Rows * lower.Rows and N = this.Columns * lower.Columns. + + The other matrix. + The Kronecker product of the two matrices. + If the result matrix's dimensions are not (this.Rows * lower.rows) x (this.Columns * lower.Columns). + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. 
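A sketch of the whole-matrix quantities and pointwise comparisons covered above (trace, determinant, rank, condition number, inverse, pseudo-inverse, Kronecker product, pointwise minimum/maximum), assuming the Math.NET Numerics method names; the numeric comments apply to this diagonal example only.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class PropertiesSketch
{
    static void Main()
    {
        var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 2.0, 0.0 }, { 0.0, 3.0 } });
        var eye = Matrix<double>.Build.DenseIdentity(2);

        Console.WriteLine(a.Trace());            // 5
        Console.WriteLine(a.Determinant());      // 6
        Console.WriteLine(a.Rank());             // 2, obtained from the SVD as noted above
        Console.WriteLine(a.ConditionNumber());  // 3 / 2 = 1.5
        Console.WriteLine(a.Inverse());          // diag(0.5, 1/3)
        Console.WriteLine(a.PseudoInverse());    // equals Inverse() for a full-rank square matrix

        // Kronecker product: (2*2)-by-(2*2) = 4x4 block matrix.
        Console.WriteLine(a.KroneckerProduct(eye));

        // Pointwise clipping against a scalar and against another matrix.
        Console.WriteLine(a.PointwiseMaximum(1.0));  // every element at least 1
        Console.WriteLine(a.PointwiseMinimum(eye));  // element-wise minimum with the identity
    }
}
```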
+ + + + Pointwise applies the absolute maximum with the values of another matrix to each value. + + The matrix with the values to compare to. + The matrix to store the result. + If this matrix and are not the same size. + + + Calculates the induced L1 norm of this matrix. + The maximum absolute column sum of the matrix. + + + Calculates the induced L2 norm of the matrix. + The largest singular value of the matrix. + + For sparse matrices, the L2 norm is computed using a dense implementation of singular value decomposition. + In a later release, it will be replaced with a sparse implementation. + + + + Calculates the induced infinity norm of this matrix. + The maximum absolute row sum of the matrix. + + + Calculates the entry-wise Frobenius norm of this matrix. + The square root of the sum of the squared values. + + + + Calculates the p-norms of all row vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the p-norms of all column vectors. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all row vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Normalizes all column vectors to a unit p-norm. + Typical values for p are 1.0 (L1, Manhattan norm), 2.0 (L2, Euclidean norm) and positive infinity (infinity norm) + + + + + Calculates the value sum of each row vector. + + + + + Calculates the value sum of each column vector. + + + + + Calculates the absolute value sum of each row vector. + + + + + Calculates the absolute value sum of each column vector. + + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Returns a string that describes the type, dimensions and shape of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string 2D array that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes the content of this matrix. + + + + + Returns a string that summarizes this matrix. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this matrix. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Matrix class. + + + + + Gets the raw matrix data storage. + + + + + Gets the number of columns. + + The number of columns. + + + + Gets the number of rows. + + The number of rows. + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. 
+ This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + + + + Sets the value of the given element without range checking. + + + The row of the element. + + + The column of the element. + + + The value to set the element to. + + + + + Sets all values to zero. + + + + + Sets all values of a row to zero. + + + + + Sets all values of a column to zero. + + + + + Sets all values for all of the chosen rows to zero. + + + + + Sets all values for all of the chosen columns to zero. + + + + + Sets all values of a sub-matrix to zero. + + + + + Set all values whose absolute value is smaller than the threshold to zero, in-place. + + + + + Set all values that meet the predicate to zero, in-place. + + + + + Creates a clone of this instance. + + + A clone of the instance. + + + + + Copies the elements of this matrix to the given matrix. + + + The matrix to copy values into. + + + If target is . + + + If this and the target matrix do not have the same dimensions.. + + + + + Copies a row into an Vector. + + The row to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of rows. + + + + Copies a row into to the given Vector. + + The row to copy. + The Vector to copy the row into. + If the result vector is . + If is negative, + or greater than or equal to the number of rows. + If this.Columns != result.Count. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of rows. + is negative, + or greater than or equal to the number of columns. + (columnIndex + length) >= Columns. + If is not positive. + + + + Copies the requested row elements into a new Vector. + + The row to copy elements from. + The column to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Copies a column into a new Vector>. + + The column to copy. + A Vector containing the copied elements. + If is negative, + or greater than or equal to the number of columns. + + + + Copies a column into to the given Vector. + + The column to copy. + The Vector to copy the column into. + If the result Vector is . + If is negative, + or greater than or equal to the number of columns. + If this.Rows != result.Count. + + + + Copies the requested column elements into a new Vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + A Vector containing the requested elements. + If: + is negative, + or greater than or equal to the number of columns. + is negative, + or greater than or equal to the number of rows. + (rowIndex + length) >= Rows. + + If is not positive. + + + + Copies the requested column elements into the given vector. + + The column to copy elements from. + The row to start copying from. + The number of elements to copy. + The Vector to copy the column into. + If the result Vector is . 
+ If is negative, + or greater than or equal to the number of columns. + If is negative, + or greater than or equal to the number of rows. + If + + is greater than or equal to the number of rows. + If is not positive. + If result.Count < length. + + + + Returns a new matrix containing the upper triangle of this matrix. + + The upper triangle of this matrix. + + + + Returns a new matrix containing the lower triangle of this matrix. + + The lower triangle of this matrix. + + + + Puts the lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Puts the upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a matrix that contains the values from the requested sub-matrix. + + The row to start copying from. + The number of rows to copy. Must be positive. + The column to start copying from. + The number of columns to copy. Must be positive. + The requested sub-matrix. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + If or + is not positive. + + + + Returns the elements of the diagonal in a Vector. + + The elements of the diagonal. + For non-square matrices, the method returns Min(Rows, Columns) elements where + i == j (i is the row index, and j is the column index). + + + + Returns a new matrix containing the lower triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The lower triangle of this matrix. + + + + Puts the strictly lower triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Returns a new matrix containing the upper triangle of this matrix. The new matrix + does not contain the diagonal elements of this matrix. + + The upper triangle of this matrix. + + + + Puts the strictly upper triangle of this matrix into the result matrix. + + Where to store the lower triangle. + If is . + If the result matrix's dimensions are not the same as this matrix. + + + + Creates a new matrix and inserts the given column at the given index. + + The index of where to insert the column. + The column to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of columns. + If the size of != the number of rows. + + + + Creates a new matrix with the given column removed. + + The index of the column to remove. + A new matrix without the chosen column. + If is < zero or >= the number of columns. + + + + Copies the values of the given Vector to the specified column. + + The column to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + + + + Copies the values of the given Vector to the specified sub-column. + + The column to copy the values to. + The row to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. 
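The row, column and sub-matrix accessors above can be combined as follows; this sketch assumes the Math.NET Numerics names (`Row`, `Column`, `Diagonal`, `UpperTriangle`, `SubMatrix`, `SetColumn`, `ClearRow`).

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class SubMatrixSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,] {
            { 1.0, 2.0, 3.0 },
            { 4.0, 5.0, 6.0 },
            { 7.0, 8.0, 9.0 }
        });

        Console.WriteLine(m[1, 2]);                 // range-checked indexer: 6
        Console.WriteLine(m.Row(0));                // (1, 2, 3)
        Console.WriteLine(m.Column(2));             // (3, 6, 9)
        Console.WriteLine(m.Diagonal());            // (1, 5, 9)
        Console.WriteLine(m.UpperTriangle());       // zeros below the main diagonal
        Console.WriteLine(m.SubMatrix(0, 2, 1, 2)); // rows 0..1, columns 1..2

        // In-place modification: replace a column, then zero out a row.
        m.SetColumn(1, Vector<double>.Build.Dense(new[] { -2.0, -5.0, -8.0 }));
        m.ClearRow(2);
        Console.WriteLine(m);
    }
}
```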
+ + + + Copies the values of the given array to the specified column. + + The column to copy the values to. + The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of columns. + If the size of does not + equal the number of rows of this Matrix. + If the size of does not + equal the number of rows of this Matrix. + + + + Creates a new matrix and inserts the given row at the given index. + + The index of where to insert the row. + The row to insert. + A new matrix with the inserted column. + If is . + If is < zero or > the number of rows. + If the size of != the number of columns. + + + + Creates a new matrix with the given row removed. + + The index of the row to remove. + A new matrix without the chosen row. + If is < zero or >= the number of rows. + + + + Copies the values of the given Vector to the specified row. + + The row to copy the values to. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given Vector to the specified sub-row. + + The row to copy the values to. + The column to start copying to. + The number of elements to copy. + The vector to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of the given array to the specified row. + + The row to copy the values to. + The array to copy the values from. + If is . + If is less than zero, + or greater than or equal to the number of rows. + If the size of does not + equal the number of columns of this Matrix. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The column to start copying to. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The number of rows to copy. Must be positive. + The column to start copying to. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of a given matrix into a region in this matrix. + + The row to start copying to. + The row of the sub-matrix to start copying from. + The number of rows to copy. Must be positive. + The column to start copying to. + The column of the sub-matrix to start copying from. + The number of columns to copy. Must be positive. + The sub-matrix to copy from. + If: is + negative, or greater than or equal to the number of rows. + is negative, or greater than or equal to the number + of columns. + (columnIndex + columnLength) >= Columns + (rowIndex + rowLength) >= Rows + the size of is not at least x . + If or + is not positive. + + + + Copies the values of the given Vector to the diagonal. + + The vector to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . 
+ If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Copies the values of the given array to the diagonal. + + The array to copy the values from. The length of the vector should be + Min(Rows, Columns). + If is . + If the length of does not + equal Min(Rows, Columns). + For non-square matrices, the elements of are copied to + this[i,i]. + + + + Returns the transpose of this matrix. + + The transpose of this matrix. + + + + Puts the transpose of this matrix into the result matrix. + + + + + Returns the conjugate transpose of this matrix. + + The conjugate transpose of this matrix. + + + + Puts the conjugate transpose of this matrix into the result matrix. + + + + + Permute the rows of a matrix according to a permutation. + + The row permutation to apply to this matrix. + + + + Permute the columns of a matrix according to a permutation. + + The column permutation to apply to this matrix. + + + + Concatenates this matrix with the given matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Concatenates this matrix with the given matrix and places the result into the result matrix. + + The matrix to concatenate. + The combined matrix. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Stacks this matrix on top of the given matrix and places the result into the result matrix. + + The matrix to stack this matrix upon. + The combined matrix. + If lower is . + If upper.Columns != lower.Columns. + + + + + + Diagonally stacks his matrix on top of the given matrix. The new matrix is a M-by-N matrix, + where M = this.Rows + lower.Rows and N = this.Columns + lower.Columns. + The values of off the off diagonal matrices/blocks are set to zero. + + The lower, right matrix. + If lower is . + the combined matrix + + + + + + Diagonally stacks his matrix on top of the given matrix and places the combined matrix into the result matrix. + + The lower, right matrix. + The combined matrix + If lower is . + If the result matrix is . + If the result matrix's dimensions are not (this.Rows + lower.rows) x (this.Columns + lower.Columns). + + + + + + Evaluates whether this matrix is symmetric. + + + + + Evaluates whether this matrix is Hermitian (conjugate symmetric). + + + + + Returns this matrix as a multidimensional array. + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + + A multidimensional containing the values of this matrix. + + + + Returns the matrix's elements as an array with the data laid out column by column (column major). + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
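The column-major ordering illustrated above can be reproduced with a few lines. This assumes the `ToArray`/`ToColumnMajorArray` conversion methods described here, which return independent copies.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class ColumnMajorSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,] {
            { 1.0, 2.0, 3.0 },
            { 4.0, 5.0, 6.0 },
            { 7.0, 8.0, 9.0 }
        });

        double[,] copy = m.ToArray();               // independent two-dimensional copy
        double[] colMajor = m.ToColumnMajorArray(); // 1, 4, 7, 2, 5, 8, 3, 6, 9

        Console.WriteLine(string.Join(", ", colMajor));
        copy[0, 0] = 99.0;                          // the copy is detached ...
        Console.WriteLine(m[0, 0]);                 // ... so the matrix still holds 1
    }
}
```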
+ + + Returns the matrix's elements as an array with the data laid out row by row (row major). + The returned array will be independent from this matrix. + A new memory block will be allocated for the array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns this matrix as array of row arrays. + The returned arrays will be independent from this matrix. + A new memory block will be allocated for the arrays. + + + + + Returns this matrix as array of column arrays. + The returned arrays will be independent from this matrix. + A new memory block will be allocated for the arrays. + + + + + Returns the internal multidimensional array of this matrix if, and only if, this matrix is stored by such an array internally. + Otherwise returns null. Changes to the returned array and the matrix will affect each other. + Use ToArray instead if you always need an independent array. + + + + + Returns the internal column by column (column major) array of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToColumnMajorArray instead if you always need an independent array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 4, 7, 2, 5, 8, 3, 6, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
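In contrast to the To* copies, the As* accessors above hand out the internal storage when the layout matches, and null otherwise. The sketch below assumes a dense matrix whose backing store is a single column-major array, which is what the `AsColumnMajorArray` remarks describe; treat the non-null result as an assumption about the storage type rather than a guarantee.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class AsArraySketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });

        // For a dense, column-major backed matrix this returns the internal buffer (not a copy).
        double[] backing = m.AsColumnMajorArray();
        if (backing != null)
        {
            backing[0] = 42.0;           // writes through to the matrix ...
            Console.WriteLine(m[0, 0]);  // ... so this prints 42
        }
        else
        {
            // Other storage schemes (e.g. sparse) return null; fall back to an independent copy.
            Console.WriteLine(string.Join(", ", m.ToColumnMajorArray()));
        }
    }
}
```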
+ + + Returns the internal row by row (row major) array of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToRowMajorArray instead if you always need an independent array. + +
+            1, 2, 3
+            4, 5, 6  will be returned as  1, 2, 3, 4, 5, 6, 7, 8, 9
+            7, 8, 9
+            
+ An array containing the matrix's elements. + + +
+ + + Returns the internal row arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToRowArrays instead if you always need an independent array. + + + + + Returns the internal column arrays of this matrix if, and only if, this matrix is stored by such arrays internally. + Otherwise returns null. Changes to the returned arrays and the matrix will affect each other. + Use ToColumnArrays instead if you always need an independent array. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix. + + + The enumerator will include all values, even if they are zero. + The ordering of the values is unspecified (not necessarily column-wise or row-wise). + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all values of the matrix and their index. + + + The enumerator returns a Tuple with the first two values being the row and column index + and the third value being the value of the element at that index. + The enumerator will include all values, even if they are zero. + + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix. + + The column to start enumerating over. + The number of columns to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all columns of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all columns of the matrix and their index. + + The column to start enumerating over. + The number of columns to enumerating over. + + The enumerator returns a Tuple with the first value being the column index + and the second value being the value of the column at that index. + + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix. + + The row to start enumerating over. + The number of rows to enumerating over. + + + + Returns an IEnumerable that can be used to iterate through all rows of the matrix and their index. + + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. + + + + + Returns an IEnumerable that can be used to iterate through a subset of all rows of the matrix and their index. + + The row to start enumerating over. + The number of rows to enumerating over. + + The enumerator returns a Tuple with the first value being the row index + and the second value being the value of the row at that index. 
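The enumeration members above integrate directly with LINQ. A short sketch, assuming the Math.NET Numerics names `Enumerate`, `EnumerateIndexed`, `EnumerateRows` and `ToRowArrays`:

```csharp
using System;
using System.Linq;
using MathNet.Numerics.LinearAlgebra;

class EnumerateSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });

        // All values; the ordering is unspecified, as the remarks above point out.
        Console.WriteLine(m.Enumerate().Sum());   // 10

        // Values together with their (row, column) index.
        foreach (var entry in m.EnumerateIndexed())
            Console.WriteLine(entry);

        // Whole rows as vectors.
        foreach (var row in m.EnumerateRows())
            Console.WriteLine(row);

        // Independent jagged-array copy, one array per row.
        double[][] rows = m.ToRowArrays();
        Console.WriteLine(rows[1][0]);            // 3
    }
}
```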
+ + + + + Applies a function to each value of this matrix and replaces the value with its result. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value with its result. + The row and column indices of each value (zero-based) are passed as first arguments to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and replaces the value in the result matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + Applies a function to each value of this matrix and returns the results as a new matrix. + The index of each value (zero-based) is passed as first argument to the function. + If forceMapZero is not set to true, zero values may or may not be skipped depending + on the actual data storage implementation (relevant mostly for sparse matrices). + + + + + For each row, applies a function f to each element of the row, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each row. + + + + + For each column, applies a function f to each element of the column, threading an accumulator argument through the computation. + Returns an array with the resulting accumulator states for each column. + + + + + Applies a function f to each row vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Applies a function f to each column vector, threading an accumulator vector argument through the computation. + Returns the resulting accumulator vector. + + + + + Reduces all row vectors by applying a function between two of them, until only a single vector is left. + + + + + Reduces all column vectors by applying a function between two of them, until only a single vector is left. 
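The functional helpers above (map, indexed map, in-place map, reductions over rows and columns) might look like this in use. `Map`, `MapIndexed` and `MapInplace` are standard Math.NET Numerics names; `ReduceRows` is named after the wording above and should be treated as an assumption.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class MapSketch
{
    static void Main()
    {
        var m = Matrix<double>.Build.DenseOfArray(new double[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });

        // New matrix with f applied to every element (zeros may be skipped on sparse storage).
        Console.WriteLine(m.Map(x => x * x));                            // 1, 4, 9, 16

        // Indexed variant: the zero-based (row, column) position comes first.
        Console.WriteLine(m.MapIndexed((i, j, x) => i == j ? x : 0.0));  // keep only the diagonal

        // In-place variant mutates the matrix itself, so work on a clone here.
        var copy = m.Clone();
        copy.MapInplace(x => x + 1.0);
        Console.WriteLine(copy);                                         // 2, 3, 4, 5

        // Reduce all row vectors pairwise; with '+' this yields the per-column sums.
        Console.WriteLine(m.ReduceRows((acc, row) => acc + row));        // (4, 6)
    }
}
```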
+ + + + + Applies a function to each value pair of two matrices and replaces the value in the result vector. + + + + + Applies a function to each value pair of two matrices and returns the results as a new vector. + + + + + Applies a function to update the status with each value pair of two matrices and returns the resulting status. + + + + + Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a tuple with the index and values of the first element pair of two matrices of the same size satisfying a predicate, or null if none is found. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if at least one element pairs of two matrices of the same size satisfies a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all elements satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns true if all element pairs of two matrices of the same size satisfy a predicate. + Zero elements may be skipped on sparse data structures if allowed (default). + + + + + Returns a Matrix containing the same values of . + + The matrix to get the values from. + A matrix containing a the same values as . + If is . + + + + Negates each element of the matrix. + + The matrix to negate. + A matrix containing the negated values. + If is . + + + + Adds two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to add. + The right matrix to add. + The result of the addition. + If and don't have the same dimensions. + If or is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to add. + The scalar value to add. + The result of the addition. + If is . + + + + Adds a scalar to each element of the matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to add. + The right matrix to add. + The result of the addition. + If is . + + + + Subtracts two matrices together and returns the results. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to subtract. + The right matrix to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts a scalar from each element of a matrix. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The left matrix to subtract. + The scalar value to subtract. + The result of the subtraction. + If and don't have the same dimensions. + If or is . + + + + Subtracts each element of a matrix from a scalar. + + This operator will allocate new memory for the result. It will + choose the representation of the provided matrix. + The scalar value to subtract. + The right matrix to subtract. + The result of the subtraction. 
+ If and don't have the same dimensions. + If or is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies a Matrix by a constant and returns the result. + + The matrix to multiply. + The constant to multiply the matrix by. + The result of the multiplication. + If is . + + + + Multiplies two matrices. + + This operator will allocate new memory for the result. It will + choose the representation of either or depending on which + is denser. + The left matrix to multiply. + The right matrix to multiply. + The result of multiplication. + If or is . + If the dimensions of or don't conform. + + + + Multiplies a Matrix and a Vector. + + The matrix to multiply. + The vector to multiply. + The result of multiplication. + If or is . + + + + Multiplies a Vector and a Matrix. + + The vector to multiply. + The matrix to multiply. + The result of multiplication. + If or is . + + + + Divides a scalar with a matrix. + + The scalar to divide. + The matrix. + The result of the division. + If is . + + + + Divides a matrix with a scalar. + + The matrix to divide. + The scalar value. + The result of the division. + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of the matrix of the given divisor. + + The matrix whose elements we want to compute the modulus of. + The divisor to use. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of the given dividend of each element of the matrix. + + The dividend we want to compute the modulus of. + The matrix whose elements we want to use as divisor. + The result of the calculation + If is . + + + + Computes the pointwise remainder (% operator), where the result has the sign of the dividend, + of each element of two matrices. + + The matrix whose elements we want to compute the remainder of. + The divisor to use. + If and are not the same size. + If is . + + + + Computes the sqrt of a matrix pointwise + + The input matrix + + + + + Computes the exponential of a matrix pointwise + + The input matrix + + + + + Computes the log of a matrix pointwise + + The input matrix + + + + + Computes the log10 of a matrix pointwise + + The input matrix + + + + + Computes the sin of a matrix pointwise + + The input matrix + + + + + Computes the cos of a matrix pointwise + + The input matrix + + + + + Computes the tan of a matrix pointwise + + The input matrix + + + + + Computes the asin of a matrix pointwise + + The input matrix + + + + + Computes the acos of a matrix pointwise + + The input matrix + + + + + Computes the atan of a matrix pointwise + + The input matrix + + + + + Computes the sinh of a matrix pointwise + + The input matrix + + + + + Computes the cosh of a matrix pointwise + + The input matrix + + + + + Computes the tanh of a matrix pointwise + + The input matrix + + + + + Computes the absolute value of a matrix pointwise + + The input matrix + + + + + Computes the floor of a matrix pointwise + + The input matrix + + + + + Computes the ceiling of a matrix pointwise + + The input matrix + + + + + Computes the rounded value of a matrix pointwise + + The input matrix + + + + + Computes the Cholesky decomposition for a matrix. + + The Cholesky decomposition object. + + + + Computes the LU decomposition for a matrix. + + The LU decomposition object. 
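To make the operator and pointwise-function entries above concrete, here is a small sketch (Math.NET Numerics; the Pointwise* method names are assumed to follow the descriptions above):

    using MathNet.Numerics.LinearAlgebra;

    var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });
    var b = Matrix<double>.Build.DenseOfArray(new double[,] { { 5, 6 }, { 7, 8 } });

    var sum     = a + b;      // element-wise addition (same dimensions required)
    var shifted = a + 1.0;    // scalar added to every element
    var product = a * b;      // matrix product (inner dimensions must conform)
    var scaled  = 0.5 * a;    // multiplication by a constant
    var halved  = a / 2.0;    // division by a scalar

    // Pointwise elementary functions return new matrices
    var roots = a.PointwiseSqrt();
    var exps  = a.PointwiseExp();
    var mags  = (a - b).PointwiseAbs();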
+ + + + Computes the QR decomposition for a matrix. + + The type of QR factorization to perform. + The QR decomposition object. + + + + Computes the QR decomposition for a matrix using Modified Gram-Schmidt Orthogonalization. + + The QR decomposition object. + + + + Computes the SVD decomposition for a matrix. + + Compute the singular U and VT vectors or not. + The SVD decomposition object. + + + + Computes the EVD decomposition for a matrix. + + The EVD decomposition object. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, AX = B, with A QR factorized. + + The right hand side , B. + The left hand side , X. + + + + Solves a system of linear equations, Ax = b, with A QR factorized. + + The right hand side vector, b. + The left hand side , x. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The result vector x. + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The result matrix X + The iterative solver to use. + Criteria to control when to stop iterating. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. 
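The decomposition entries above are typically used through their Solve methods. A minimal sketch, assuming the usual Math.NET Numerics factorization API (LU(), QR(), Cholesky(), Svd(), Evd()):

    using MathNet.Numerics.LinearAlgebra;

    var A = Matrix<double>.Build.DenseOfArray(new double[,]
    {
        { 4, 1, 0 },
        { 1, 3, 1 },
        { 0, 1, 2 }
    });
    var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });

    // Direct solves via a factorization object
    var xLu = A.LU().Solve(b);    // general square systems
    var xQr = A.QR().Solve(b);    // also handles least-squares problems

    // A is symmetric positive definite here, so Cholesky applies as well
    var xChol = A.Cholesky().Solve(b);

    // Spectral and singular value decompositions
    var evd = A.Evd();   // evd.EigenValues, evd.EigenVectors
    var svd = A.Svd();   // svd.S (singular values), svd.U, svd.VT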
+ + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The preconditioner to use for approximations. + The result matrix X. + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix (this matrix), b is the solution vector and x is the unknown vector. + + The solution vector b. + The iterative solver to use. + Criteria to control when to stop iterating. + The result vector x. + + + + Solves the matrix equation AX = B, where A is the coefficient matrix (this matrix), B is the solution matrix and X is the unknown matrix. + + The solution matrix B. + The iterative solver to use. + Criteria to control when to stop iterating. + The result matrix X. + + + + Converts a matrix to single precision. + + + + + Converts a matrix to double precision. + + + + + Converts a matrix to single precision complex numbers. + + + + + Converts a matrix to double precision complex numbers. + + + + + Gets a single precision complex matrix with the real parts from the given matrix. + + + + + Gets a double precision complex matrix with the real parts from the given matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the real parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Gets a real matrix representing the imaginary parts of a complex matrix. + + + + + Existing data may not be all zeros, so clearing may be necessary + if not all of it will be overwritten anyway. + + + + + If existing data is assumed to be all zeros already, + clearing it may be skipped if applicable. + + + + + Allow skipping zero entries (without enforcing skipping them). + When enumerating sparse matrices this can significantly speed up operations. + + + + + Force applying the operation to all fields even if they are zero. + + + + + It is not known yet whether a matrix is symmetric or not. + + + + + A matrix is symmetric + + + + + A matrix is Hermitian (conjugate symmetric). + + + + + A matrix is not symmetric + + + + + Defines an that uses a cancellation token as stop criterion. + + + + + Initializes a new instance of the class. + + + + + Initializes a new instance of the class. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. 
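The iterative Solve overloads above combine a solver, stop criteria and optionally a preconditioner. A hedged sketch, assuming the Math.NET Numerics 3.x solver namespaces; SolveIterative, BiCgStab and the stop-criterion class names may differ slightly between versions:

    using MathNet.Numerics.LinearAlgebra;
    using MathNet.Numerics.LinearAlgebra.Double;
    using MathNet.Numerics.LinearAlgebra.Double.Solvers;
    using MathNet.Numerics.LinearAlgebra.Solvers;

    // Small, diagonally dominant sparse system A x = b
    var A = SparseMatrix.OfArray(new double[,]
    {
        { 5, 1, 0 },
        { 1, 5, 1 },
        { 0, 1, 5 }
    });
    var b = Vector<double>.Build.DenseOfArray(new[] { 6.0, 7.0, 6.0 });

    // Stop after at most 1000 iterations, or once the residual falls below 1e-10
    var x = A.SolveIterative(
        b,
        new BiCgStab(),
        new IterationCountStopCriterion<double>(1000),
        new ResidualStopCriterion<double>(1e-10));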
+ + + + Stop criterion that delegates the status determination to a delegate. + + + + + Create a new instance of this criterion with a custom implementation. + + Custom implementation with the same signature and semantics as the DetermineStatus method. + + + + Determines the status of the iterative calculation by delegating it to the provided delegate. + Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + + + + Clones this criterion and its settings. + + + + + Monitors an iterative calculation for signs of divergence. + + + + + The maximum relative increase the residual may experience without triggering a divergence warning. + + + + + The number of iterations over which a residual increase should be tracked before issuing a divergence warning. + + + + + The status of the calculation + + + + + The array that holds the tracking information. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified maximum + relative increase and the specified minimum number of tracking iterations. + + The maximum relative increase that the residual may experience before a divergence warning is issued. + The minimum number of iterations over which the residual must grow before a divergence warning is issued. + + + + Gets or sets the maximum relative increase that the residual may experience before a divergence warning is issued. + + Thrown if the Maximum is set to zero or below. + + + + Gets or sets the minimum number of iterations over which the residual must grow before + issuing a divergence warning. + + Thrown if the value is set to less than one. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Detect if solution is diverging + + true if diverging, otherwise false + + + + Gets required history Length + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Defines an that monitors residuals for NaN's. + + + + + The status of the calculation + + + + + The iteration number of the last iteration. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. 
+ + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + The base interface for classes that provide stop criteria for iterative calculations. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current IIterationStopCriterion. Status is set to Status field of current object. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + is not a legal value. Status should be set in implementation. + + + + Resets the IIterationStopCriterion to the pre-calculation state. + + To implementers: Invoking this method should not clear the user defined + property values, only the state that is used to track the progress of the + calculation. + + + + Defines the interface for classes that solve the matrix equation Ax = b in + an iterative manner. + + + + + Solves the matrix equation Ax = b, where A is the coefficient matrix, b is the + solution vector and x is the unknown vector. + + The coefficient matrix, A. + The solution vector, b + The result vector, x + The iterator to use to control when to stop iterating. + The preconditioner to use for approximations. + + + + Defines the interface for objects that can create an iterative solver with + specific settings. This interface is used to pass iterative solver creation + setup information around. + + + + + Gets the type of the solver that will be created by this setup object. + + + + + Gets type of preconditioner, if any, that will be created by this setup object. + + + + + Creates the iterative solver to be used. + + + + + Creates the preconditioner to be used by default (can be overwritten). + + + + + Gets the relative speed of the solver. + + Returns a value between 0 and 1, inclusive. + + + + Gets the relative reliability of the solver. + + Returns a value between 0 and 1 inclusive. + + + + The base interface for preconditioner classes. + + + + Preconditioners are used by iterative solvers to improve the convergence + speed of the solving process. Increase in convergence speed + is related to the number of iterations necessary to get a converged solution. + So while in general the use of a preconditioner means that the iterative + solver will perform fewer iterations it does not guarantee that the actual + solution time decreases given that some preconditioners can be expensive to + setup and run. + + + Note that in general changes to the matrix will invalidate the preconditioner + if the changes occur after creating the preconditioner. + + + + + + Initializes the preconditioner and loads the internal data structures. + + The matrix on which the preconditioner is based. + + + + Approximates the solution to the matrix equation Mx = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. 
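The preconditioner interface described above has only two members: Initialize, which receives the coefficient matrix, and Approximate, which solves Mx = b approximately. Below is a minimal Jacobi (diagonal) preconditioner sketch built directly from that description; the class name and the lack of error handling are illustrative only.

    using MathNet.Numerics.LinearAlgebra;
    using MathNet.Numerics.LinearAlgebra.Solvers;

    // M = diag(A): Approximate solves diag(A) * lhs = rhs by element-wise division.
    public class JacobiPreconditioner : IPreconditioner<double>
    {
        Vector<double> _inverseDiagonal;

        public void Initialize(Matrix<double> matrix)
        {
            // Store 1 / A[i,i]; a production version should guard against zero diagonals.
            _inverseDiagonal = matrix.Diagonal().Map(d => 1.0 / d);
        }

        public void Approximate(Vector<double> rhs, Vector<double> lhs)
        {
            // lhs = rhs ./ diag(A), written into the caller-provided result vector
            rhs.PointwiseMultiply(_inverseDiagonal, lhs);
        }
    }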
+ + + + Defines an that monitors the numbers of iteration + steps as stop criterion. + + + + + The default value for the maximum number of iterations the process is allowed + to perform. + + + + + The maximum number of iterations the calculation is allowed to perform. + + + + + The status of the calculation + + + + + Initializes a new instance of the class with the default maximum + number of iterations. + + + + + Initializes a new instance of the class with the specified maximum + number of iterations. + + The maximum number of iterations the calculation is allowed to perform. + + + + Gets or sets the maximum number of iterations the calculation is allowed to perform. + + Thrown if the Maximum is set to a negative value. + + + + Returns the maximum number of iterations to the default. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Iterative Calculation Status + + + + + An iterator that is used to check if an iterative calculation should continue or stop. + + + + + The collection that holds all the stop criteria and the flag indicating if they should be added + to the child iterators. + + + + + The status of the iterator. + + + + + Initializes a new instance of the class with the default stop criteria. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Initializes a new instance of the class with the specified stop criteria. + + + The specified stop criteria. Only one stop criterion of each type can be passed in. None + of the stop criteria will be passed on to child iterators. + + + + + Gets the current calculation status. + + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual iterators may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Indicates to the iterator that the iterative process has been cancelled. + + + Does not reset the stop-criteria. + + + + + Resets the to the pre-calculation state. + + + + + Creates a deep clone of the current iterator. + + The deep clone of the current iterator. + + + + Defines an that monitors residuals as stop criterion. + + + + + The maximum value for the residual below which the calculation is considered converged. 
+ + + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + The status of the calculation + + + + + The number of iterations since the residuals got below the maximum. + + + + + The iteration number of the last iteration. + + + + + Initializes a new instance of the class with the specified + maximum residual and minimum number of iterations. + + + The maximum value for the residual below which the calculation is considered converged. + + + The minimum number of iterations for which the residual has to be below the maximum before + the calculation is considered converged. + + + + + Gets or sets the maximum value for the residual below which the calculation is considered + converged. + + Thrown if the Maximum is set to a negative value. + + + + Gets or sets the minimum number of iterations for which the residual has to be + below the maximum before the calculation is considered converged. + + Thrown if the BelowMaximumFor is set to a value less than 1. + + + + Determines the status of the iterative calculation based on the stop criteria stored + by the current . Result is set into Status field. + + The number of iterations that have passed so far. + The vector containing the current solution values. + The right hand side vector. + The vector containing the current residual vectors. + + The individual stop criteria may internally track the progress of the calculation based + on the invocation of this method. Therefore this method should only be called if the + calculation has moved forwards at least one step. + + + + + Gets the current calculation status. + + + + + Resets the to the pre-calculation state. + + + + + Clones the current and its settings. + + A new instance of the class. + + + + Loads the available objects from the specified assembly. + + The assembly which will be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The type in the assembly which should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the specified assembly. + + The of the assembly that should be searched for setup objects. + If true, types that fail to load are simply ignored. Otherwise the exception is rethrown. + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + The types that should not be loaded. + + + + Loads the available objects from the Math.NET Numerics assembly. + + + + + A unit preconditioner. This preconditioner does not actually do anything + it is only used when running an without + a preconditioner. + + + + + The coefficient matrix on which this preconditioner operates. + Is used to check dimensions on the different vectors that are processed. + + + + + Initializes the preconditioner and loads the internal data structures. + + + The matrix upon which the preconditioner is based. + + If is not a square matrix. + + + + Approximates the solution to the matrix equation Ax = b. + + The right hand side vector. + The left hand side vector. Also known as the result vector. + + + If and do not have the same size. + + + - or - + + + If the size of is different the number of rows of the coefficient matrix. 
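Putting the Iterator, stop-criterion and UnitPreconditioner entries above together, the sketch below drives a solver through the IIterativeSolver Solve signature described earlier and then inspects the iterator's status. The class names (TFQMR, DivergenceStopCriterion, UnitPreconditioner) and the IterationStatus values are assumed from Math.NET Numerics 3.x and may differ in other versions.

    using MathNet.Numerics.LinearAlgebra;
    using MathNet.Numerics.LinearAlgebra.Double;
    using MathNet.Numerics.LinearAlgebra.Double.Solvers;
    using MathNet.Numerics.LinearAlgebra.Solvers;

    var A = SparseMatrix.OfArray(new double[,]
    {
        { 4, 1, 0 },
        { 1, 4, 1 },
        { 0, 1, 4 }
    });
    var b = Vector<double>.Build.DenseOfArray(new[] { 5.0, 6.0, 5.0 });
    var x = Vector<double>.Build.Dense(b.Count);

    // One iterator bundling several stop criteria: converge on a small residual,
    // bail out on divergence, and cap the number of iterations.
    var monitor = new Iterator<double>(
        new ResidualStopCriterion<double>(1e-8),
        new DivergenceStopCriterion<double>(),
        new IterationCountStopCriterion<double>(500));

    var solver = new TFQMR();
    solver.Solve(A, b, x, monitor, new UnitPreconditioner<double>());

    bool converged = monitor.Status == IterationStatus.Converged;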
+ + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Evaluate the row and column at a specific data index. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Gets or sets the value at the given row and column, with range checking. + + + The row of the element. + + + The column of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. + + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + The state array will not be modified, unless it is the same instance as the target array (which is allowed). + + + + The array containing the row indices of the existing rows. Element "i" of the array gives the index of the + element in the array that is first non-zero element in a row "i". 
+ The last value is equal to ValueCount, so that the number of non-zero entries in row "i" is always + given by RowPointers[i+i] - RowPointers[i]. This array thus has length RowCount+1. + + + + + An array containing the column indices of the non-zero values. Element "j" of the array + is the number of the column in matrix that contains the j-th value in the array. + + + + + Array that contains the non-zero elements of matrix. Values of the non-zero elements of matrix are mapped into the values + array using the row-major storage mapping described in a compressed sparse row (CSR) format. + + + + + Gets the number of non zero elements in the matrix. + + The number of non zero elements. + + + + True if the matrix storage format is dense. + + + + + True if all fields of this matrix can be set to any value. + False if some fields are fixed, like on a diagonal matrix. + + + + + True if the specified field can be set to any value. + False if the field is fixed, like an off-diagonal field on a diagonal matrix. + + + + + Retrieves the requested element without range checking. + + + The row of the element. + + + The column of the element. + + + The requested element. + + Not range-checked. + + + + Sets the element without range checking. + + The row of the element. + The column of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Delete value from internal storage + + Index of value in nonZeroValues array + Row number of matrix + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Find item Index in nonZeroValues array + + Matrix row index + Matrix column index + Item index + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Array that contains the indices of the non-zero values. + + + + + Array that contains the non-zero elements of the vector. + + + + + Gets the number of non-zero elements in the vector. + + + + + True if the vector storage format is dense. + + + + + Retrieves the requested element without range checking. + + + + + Sets the element without range checking. + + + + + Calculates the amount with which to grow the storage array's if they need to be + increased in size. + + The amount grown. + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + True if the vector storage format is dense. + + + + + Gets or sets the value at the given index, with range checking. + + + The index of the element. + + The value to get or set. + This method is ranged checked. and + to get and set values without range checking. + + + + Retrieves the requested element without range checking. + + The index of the element. + The requested element. + Not range-checked. + + + + Sets the element without range checking. + + The index of the element. + The value to set the element to. + WARNING: This method is not thread safe. Use "lock" with it and be sure to avoid deadlocks. + + + + Indicates whether the current object is equal to another object of the same type. + + + An object to compare with this object. 
+ + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to the current . + + + true if the specified is equal to the current ; otherwise, false. + + The to compare with the current . + + + + Serves as a hash function for a particular type. + + + A hash code for the current . + + + + + Defines the generic class for Vector classes. + + Supported data types are double, single, , and . + + + + The zero value for type T. + + + + + The value of 1.0 for type T. + + + + + Negates vector and save result to + + Target vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar denominator to use. + The vector to store the result of the division. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar numerator to use. + The vector to store the result of the division. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (% operator), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. 
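A short example of the dot and outer products described above, using the Math.NET Numerics vector API:

    using MathNet.Numerics.LinearAlgebra;

    var u = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
    var v = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

    double dot = u.DotProduct(v);             // 1*4 + 2*5 + 3*6 = 32
    Matrix<double> outer = u.OuterProduct(v); // 3x3 matrix with outer[i,j] = u[i]*v[j]

    // For complex vectors, ConjugateDotProduct conjugates the left operand first.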
+ + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the division. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise raise this vector to an exponent vector and store the result into the result vector. + + The exponent vector to raise this vector values to. + The vector to store the result of the pointwise power. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The result of the modulus. + + + + Pointwise applies the exponential function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Pointwise applies the natural logarithm function to each value and stores the result into the result vector. + + The vector to store the result. + + + + Adds a scalar to each element of the vector. + + The scalar to add. + A copy of the vector with the scalar added. + + + + Adds a scalar to each element of the vector and stores the result in the result vector. + + The scalar to add. + The vector to store the result of the addition. + If this vector and are not the same size. + + + + Adds another vector to this vector. + + The vector to add to this one. + A new vector containing the sum of both vectors. + If this vector and are not the same size. + + + + Adds another vector to this vector and stores the result into the result vector. + + The vector to add to this one. + The vector to store the result of the addition. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Subtracts a scalar from each element of the vector. + + The scalar to subtract. + A new vector containing the subtraction of this vector and the scalar. + + + + Subtracts a scalar from each element of the vector and stores the result in the result vector. + + The scalar to subtract. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Subtracts each element of the vector from a scalar. + + The scalar to subtract from. + A new vector containing the subtraction of the scalar and this vector. + + + + Subtracts each element of the vector from a scalar and stores the result in the result vector. + + The scalar to subtract from. + The vector to store the result of the subtraction. + If this vector and are not the same size. + + + + Returns a negated vector. + + The negated vector. + Added as an alternative to the unary negation operator. + + + + Negates vector and save result to + + Target vector + + + + Subtracts another vector from this vector. + + The vector to subtract from this one. + A new vector containing the subtraction of the two vectors. + If this vector and are not the same size. 
+ + + + Subtracts another vector to this vector and stores the result into the result vector. + + The vector to subtract from this one. + The vector to store the result of the subtraction. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Return vector with complex conjugate values of the source vector + + Conjugated vector + + + + Complex conjugates vector and save result to + + Target vector + + + + Multiplies a scalar to each element of the vector. + + The scalar to multiply. + A new vector that is the multiplication of the vector and the scalar. + + + + Multiplies a scalar to each element of the vector and stores the result in the result vector. + + The scalar to multiply. + The vector to store the result of the multiplication. + If this vector and are not the same size. + + + + Computes the dot product between this vector and another vector. + + The other vector. + The sum of a[i]*b[i] for all i. + If is not of the same size. + + + + + Computes the dot product between the conjugate of this vector and another vector. + + The other vector. + The sum of conj(a[i])*b[i] for all i. + If is not of the same size. + If is . + + + + + Divides each element of the vector by a scalar. + + The scalar to divide with. + A new vector that is the division of the vector and the scalar. + + + + Divides each element of the vector by a scalar and stores the result in the result vector. + + The scalar to divide with. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Divides a scalar by each element of the vector. + + The scalar to divide. + A new vector that is the division of the vector and the scalar. + + + + Divides a scalar by each element of the vector and stores the result in the result vector. + + The scalar to divide. + The vector to store the result of the division. + If this vector and are not the same size. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. + + + + Computes the canonical modulus, where the result has the sign of the divisor, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector containing the result. + + + + Computes the remainder (vector % divisor), where the result has the sign of the dividend, + for each element of the vector for the given divisor. + + The scalar denominator to use. + A vector to store the results in. + + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector containing the result. 
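The difference between the canonical modulus and the remainder described above is easiest to see on negative values. A small sketch; the Modulus/Remainder method names are assumed from the descriptions, so check the exact overloads in your version:

    using MathNet.Numerics.LinearAlgebra;

    var v = Vector<double>.Build.DenseOfArray(new[] { -7.0, 7.0 });

    var rem = v.Remainder(3.0); // sign of the dividend (like the C# % operator): [-1, 1]
    var mod = v.Modulus(3.0);   // sign of the divisor (canonical modulus):       [ 2, 1]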
+ + + + Computes the remainder (dividend % vector), where the result has the sign of the dividend, + for the given dividend for each element of the vector. + + The scalar numerator to use. + A vector to store the results in. + + + + Pointwise multiplies this vector with another vector. + + The vector to pointwise multiply with this one. + A new vector which is the pointwise multiplication of the two vectors. + If this vector and are not the same size. + + + + Pointwise multiplies this vector with another vector and stores the result into the result vector. + + The vector to pointwise multiply with this one. + The vector to store the result of the pointwise multiplication. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector. + + The pointwise denominator vector to use. + A new vector which is the pointwise division of the two vectors. + If this vector and are not the same size. + + + + Pointwise divide this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise division. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + The matrix to store the result into. + If this vector and are not the same size. + + + + Pointwise raise this vector to an exponent and store the result into the result vector. + + The exponent to raise this vector values to. + + + + Pointwise raise this vector to an exponent. + + The exponent to raise this vector values to. + The vector to store the result into. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise canonical modulus, where the result has the sign of the divisor, + of this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise modulus. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + of this vector with another vector. + + The pointwise denominator vector to use. + If this vector and are not the same size. + + + + Pointwise remainder (% operator), where the result has the sign of the dividend, + this vector with another vector and stores the result into the result vector. + + The pointwise denominator vector to use. + The vector to store the result of the pointwise remainder. + If this vector and are not the same size. + If this vector and are not the same size. + + + + Helper function to apply a unary function to a vector. The function + f modifies the vector given to it in place. Before its + called, a copy of the 'this' vector with the same dimension is + first created, then passed to f. 
The copy is returned as the result + + Function which takes a vector, modifies it in place and returns void + New instance of vector which is the result + + + + Helper function to apply a unary function which modifies a vector + in place. + + Function which takes a vector, modifies it in place and returns void + The vector where the result is to be stored + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes a scalar and + a vector and modifies the latter in place. A copy of the "this" + vector is therefore first made and then passed to f together with + the scalar argument. The copy is then returned as the result + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The resulting vector + + + + Helper function to apply a binary function which takes a scalar and + a vector, modifies the latter in place and returns void. + + Function which takes a scalar and a vector, modifies the vector in place and returns void + The scalar to be passed to the function + The vector where the result will be placed + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the latter in place. A copy of the "this" vector is + first made and then passed to f together with the other vector. The + copy is then returned as the result + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Helper function to apply a binary function which takes two vectors + and modifies the second one in place + + Function which takes two vectors, modifies the second in place and returns void + The other vector to be passed to the function as argument. It is not modified + The resulting vector + If this vector and are not the same size. + + + + Pointwise applies the exponent function to each value. + + + + + Pointwise applies the exponent function to each value. + + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the natural logarithm function to each value. + + + + + Pointwise applies the natural logarithm function to each value. + + The vector to store the result. + If this vector and are not the same size. 
+ + + + Pointwise applies the abs function to each value + + + + + Pointwise applies the abs function to each value + + The vector to store the result + + + + Pointwise applies the acos function to each value + + + + + Pointwise applies the acos function to each value + + The vector to store the result + + + + Pointwise applies the asin function to each value + + + + + Pointwise applies the asin function to each value + + The vector to store the result + + + + Pointwise applies the atan function to each value + + + + + Pointwise applies the atan function to each value + + The vector to store the result + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + + + + Pointwise applies the atan2 function to each value of the current + vector and a given other vector being the 'x' of atan2 and the + 'this' vector being the 'y' + + + The vector to store the result + + + + Pointwise applies the ceiling function to each value + + + + + Pointwise applies the ceiling function to each value + + The vector to store the result + + + + Pointwise applies the cos function to each value + + + + + Pointwise applies the cos function to each value + + The vector to store the result + + + + Pointwise applies the cosh function to each value + + + + + Pointwise applies the cosh function to each value + + The vector to store the result + + + + Pointwise applies the floor function to each value + + + + + Pointwise applies the floor function to each value + + The vector to store the result + + + + Pointwise applies the log10 function to each value + + + + + Pointwise applies the log10 function to each value + + The vector to store the result + + + + Pointwise applies the round function to each value + + + + + Pointwise applies the round function to each value + + The vector to store the result + + + + Pointwise applies the sign function to each value + + + + + Pointwise applies the sign function to each value + + The vector to store the result + + + + Pointwise applies the sin function to each value + + + + + Pointwise applies the sin function to each value + + The vector to store the result + + + + Pointwise applies the sinh function to each value + + + + + Pointwise applies the sinh function to each value + + The vector to store the result + + + + Pointwise applies the sqrt function to each value + + + + + Pointwise applies the sqrt function to each value + + The vector to store the result + + + + Pointwise applies the tan function to each value + + + + + Pointwise applies the tan function to each value + + The vector to store the result + + + + Pointwise applies the tanh function to each value + + + + + Pointwise applies the tanh function to each value + + The vector to store the result + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector. + + The other vector + + + + Computes the outer product M[i,j] = u[i]*v[j] of this and another vector and stores the result in the result matrix. + + The other vector + The matrix to store the result of the product. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. 
+ + + + Pointwise applies the maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute minimum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + + + + Pointwise applies the absolute maximum with a scalar to each value. + + The scalar value to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute minimum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + + + + Pointwise applies the absolute maximum with the values of another vector to each value. + + The vector with the values to compare to. + The vector to store the result. + If this vector and are not the same size. + + + + Calculates the L1 norm of the vector, also known as Manhattan norm. + + The sum of the absolute values. + + + + Calculates the L2 norm of the vector, also known as Euclidean norm. + + The square root of the sum of the squared values. + + + + Calculates the infinity norm of the vector. + + The maximum absolute value. + + + + Computes the p-Norm. + + The p value. + Scalar ret = (sum(abs(this[i])^p))^(1/p) + + + + Normalizes this vector to a unit vector with respect to the p-norm. + + The p value. + This vector normalized to a unit vector with respect to the p-norm. + + + + Returns the value of the absolute minimum element. + + The value of the absolute minimum element. + + + + Returns the index of the absolute minimum element. + + The index of absolute minimum element. + + + + Returns the value of the absolute maximum element. + + The value of the absolute maximum element. + + + + Returns the index of the absolute maximum element. + + The index of absolute maximum element. + + + + Returns the value of maximum element. + + The value of maximum element. + + + + Returns the index of the maximum element. + + The index of maximum element. + + + + Returns the value of the minimum element. + + The value of the minimum element. + + + + Returns the index of the minimum element. + + The index of minimum element. 
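The norm and extremum members above in one compact example, assuming the usual Math.NET Numerics member names:

    using MathNet.Numerics.LinearAlgebra;

    var v = Vector<double>.Build.DenseOfArray(new[] { 3.0, -4.0, 1.0 });

    double l1   = v.L1Norm();       // 8 (sum of absolute values, Manhattan norm)
    double l2   = v.L2Norm();       // sqrt(26) (Euclidean norm)
    double linf = v.InfinityNorm(); // 4 (largest absolute value)

    var unit = v.Normalize(2.0);    // rescaled so its L2 norm is 1

    double amax    = v.AbsoluteMaximum();      // 4
    int    amaxIdx = v.AbsoluteMaximumIndex(); // 1
    double min     = v.Minimum();              // -4
    int    minIdx  = v.MinimumIndex();         // 1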
+ + + + Computes the sum of the vector's elements. + + The sum of the vector's elements. + + + + Computes the sum of the absolute value of the vector's elements. + + The sum of the absolute value of the vector's elements. + + + + Indicates whether the current object is equal to another object of the same type. + + An object to compare with this object. + + true if the current object is equal to the parameter; otherwise, false. + + + + + Determines whether the specified is equal to this instance. + + The to compare with this instance. + + true if the specified is equal to this instance; otherwise, false. + + + + + Returns a hash code for this instance. + + + A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. + + + + + Creates a new object that is a copy of the current instance. + + + A new object that is a copy of this instance. + + + + + Returns an enumerator that iterates through the collection. + + + A that can be used to iterate through the collection. + + + + + Returns an enumerator that iterates through a collection. + + + An object that can be used to iterate through the collection. + + + + + Returns a string that describes the type, dimensions and shape of this vector. + + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Character to use to print if there is not enough space to print all entries. Typical value: "..". + Character to use to separate two columns on a line. Typical value: " " (2 spaces). + Character to use to separate two rows/lines. Typical value: Environment.NewLine. + Function to provide a string for any given entry value. + + + + Returns a string that represents the content of this vector, column by column. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that represents the content of this vector, column by column. + + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector, column by column and with a type header. + + Maximum number of entries and thus lines per column. Typical value: 12; Minimum: 3. + Maximum number of characters per line over all columns. Typical value: 80; Minimum: 16. + Floating point format string. Can be null. Default value: G6. + Format provider or culture. Can be null. + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + + + + + Returns a string that summarizes this vector. + The maximum number of cells can be configured in the class. + The format string is ignored. + + + + + Initializes a new instance of the Vector class. + + + + + Gets the raw vector data storage. + + + + + Gets the length or number of dimensions of this vector. + + + + Gets or sets the value at the given . + The index of the value to get or set. + The value of the vector at the given . + If is negative or + greater than the size of the vector. + + + Gets the value at the given without range checking.. + The index of the value to get or set. + The value of the vector at the given . 
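To contrast the range-checked indexer with the unchecked At accessors and the summation members described above, a minimal sketch:

    using MathNet.Numerics.LinearAlgebra;

    var v = Vector<double>.Build.Dense(5, i => (double)(i * i)); // [0, 1, 4, 9, 16]

    double a = v[2];     // range-checked indexer
    double b = v.At(2);  // unchecked read (no bounds check; caller must stay in range)
    v.At(2, 42.0);       // unchecked write

    double total    = v.Sum();            // sum of the elements
    double totalAbs = v.SumMagnitudes();  // sum of the absolute values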
Vector: clearing, copying and conversion
* Sets the value at a given index without range checking.
* Resets all values to zero; sets all values of a subvector to zero; sets all values whose absolute value is smaller than a threshold to zero, in place; sets all values that meet a predicate to zero, in place.
* Returns a deep-copy clone of the vector.
* Sets the values of this vector from a given array, and copies the values of this vector into a target vector; both throw if the argument is null or not the same size as this vector.
* Creates a vector containing a copy of specified elements, given the first element to copy from and the number of elements to copy; throws if the starting index or count reaches outside the vector or if the count is not positive. Related members copy a given sub-vector into a region of this vector, and copy a requested range of elements from this vector into another vector at a given offset.
* Returns the data contained in the vector as an array: the returned array is independent of the vector, and a new memory block is allocated for it. A second accessor returns the internal array if, and only if, the vector is backed by such an array internally, and null otherwise; changes to that array and the vector affect each other, so use ToArray whenever an independent array is needed.
* Creates a matrix based on this vector in column form (one single column) or in row form (one single row).
* Returns enumerables that iterate through all values of the vector, or through all values together with their zero-based index (as a tuple of index and value); these enumerators include all values, even those that are zero.
* Applies a function to each value of this vector and replaces the value with its result; unless forceMapZero is set to true, zero values may or may not be skipped depending on the underlying data storage (relevant mostly for sparse vectors).
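The copy and conversion members above are easier to see in code. The following is a minimal sketch assuming the documented API is the Math.NET Numerics vector type (MathNet.Numerics NuGet package); the member names used here (Build.DenseOfArray, SubVector, ToArray, ToColumnMatrix) are taken from that library, not from this file, and should be checked against the installed version.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorBasics
{
    static void Main()
    {
        // Build a dense vector from an array (the values are copied).
        Vector<double> v = Vector<double>.Build.DenseOfArray(new[] { 4.0, 0.0, -2.5, 7.0 });

        Console.WriteLine(v[0]);        // indexer with range checking
        Console.WriteLine(v.Count);     // length (number of dimensions)
        Console.WriteLine(v.Sum());     // sum of the elements

        // Copy a sub-range and convert to other representations.
        Vector<double> middle = v.SubVector(1, 2);   // two elements starting at index 1
        double[] independentCopy = v.ToArray();      // new array, independent of v
        var columnMatrix = v.ToColumnMatrix();       // 4x1 matrix holding the same values

        Console.WriteLine(middle);
        Console.WriteLine(columnMatrix.RowCount + "x" + columnMatrix.ColumnCount);
        Console.WriteLine(independentCopy.Length);
    }
}
```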
Vector: map, zip and predicate operations
* Applies a function to each value of this vector and replaces the value with its result, optionally passing the zero-based index of each value as the first argument.
* Applies a function to each value of this vector and writes the result into a result vector, with or without the zero-based index as first argument.
* Applies a function to each value of this vector and returns the results as a new vector, with or without the zero-based index as first argument.
* In all map variants, unless forceMapZero is set to true, zero values may or may not be skipped depending on the underlying data storage (relevant mostly for sparse vectors).
* Applies a function to each value pair of two vectors and writes the result into a result vector, or returns the results as a new vector; a fold variant applies a function that updates a status value with each value pair of two vectors and returns the resulting status.
* Returns a tuple with the index and value of the first element satisfying a predicate, or null if none is found; a two-vector variant returns the index and values of the first element pair of two vectors of the same size satisfying a predicate.
* Returns true if at least one element (or at least one element pair of two vectors of the same size) satisfies a predicate, and true if all elements satisfy a predicate.
* For the search and predicate operations, zero elements may be skipped on sparse data structures if allowed (the default).
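A short sketch of the map and predicate members, again assuming the Math.NET Numerics vector API; the names Map, MapIndexed, Find, Exists and ForAll are assumptions based on that library.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorMapping
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, -2.0, 3.0, -4.0 });

        // Map: apply a function to every element, returning a new vector.
        Vector<double> squared = v.Map(x => x * x);

        // MapIndexed: the zero-based index is passed as the first argument.
        Vector<double> weighted = v.MapIndexed((i, x) => (i + 1) * x);

        // Find the first (index, value) pair satisfying a predicate.
        var firstNegative = v.Find(x => x < 0.0);

        // Exists / ForAll style predicates over all elements.
        bool anyNegative = v.Exists(x => x < 0.0);
        bool allFinite = v.ForAll(x => !double.IsNaN(x));

        Console.WriteLine(squared);
        Console.WriteLine(weighted);
        Console.WriteLine(firstNegative);          // e.g. (1, -2)
        Console.WriteLine(anyNegative + " " + allFinite);
    }
}
```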
Vector: arithmetic operators
* Returns true if all element pairs of two vectors of the same size satisfy a predicate; zero elements may be skipped on sparse data structures if allowed (the default).
* The unary plus operator returns a vector containing the same values as its operand (included for completeness); the unary minus operator returns a vector containing the negated values.
* Binary operators add two vectors, add a scalar to each element of a vector (in either order), subtract one vector from another, subtract a scalar from each element of a vector, and subtract each element of a vector from a scalar.
* Multiplication scales a vector by a scalar (in either order); multiplying two vectors computes their dot product (left row vector times right column vector).
* Division divides each element of a vector by a scalar, divides a scalar by a vector, and pointwise divides two vectors.
* The remainder operator (%), with the result carrying the sign of the dividend, applies to each element of a vector and a scalar divisor, to a scalar dividend and each element of a vector, and pointwise to the elements of two vectors.
* All of these operators throw if an operand is null or, for the two-vector forms, if the vectors are not the same size.
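The operator overloads in practice, as a minimal sketch assuming the Math.NET Numerics vector type; the scalar-operator and PointwiseDivide names are assumptions based on that library.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorOperators
{
    static void Main()
    {
        var a = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0, 3.0 });
        var b = Vector<double>.Build.DenseOfArray(new[] { 4.0, 5.0, 6.0 });

        Vector<double> sum = a + b;           // element-wise addition -> (5, 7, 9)
        Vector<double> shifted = a + 10.0;    // scalar added to every element
        Vector<double> scaled = 2.0 * a;      // scalar multiplication
        double dot = a * b;                   // dot product (scalar result) -> 32
        Vector<double> ratio = a.PointwiseDivide(b);   // element-wise division
        Vector<double> remainder = a % 2.0;   // remainder, sign follows the dividend

        Console.WriteLine(sum);
        Console.WriteLine(shifted);
        Console.WriteLine(scaled);
        Console.WriteLine(dot);
        Console.WriteLine(ratio);
        Console.WriteLine(remainder);
    }
}
```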
Vector: pointwise functions and conversions
* Computes, pointwise, the square root, exponential, natural logarithm, base-10 logarithm, sine, cosine, tangent, arcsine, arccosine, arctangent, hyperbolic sine, hyperbolic cosine, hyperbolic tangent, absolute value, floor, ceiling and rounded value of an input vector.
* Converts a vector to single precision, to double precision, to single-precision complex numbers and to double-precision complex numbers; builds complex vectors (single or double precision) from the real parts of a given vector; and extracts real vectors representing the real parts or the imaginary parts of a complex vector.

Linear regression: direct methods
* Finds the model parameters β such that X*β with predictor matrix X becomes as close to the response Y (a vector or a matrix) as possible, in the least-squares sense, using a caller-selected direct regression method; returns the best-fitting parameter vector β.
* Variants accept a list of predictor arrays with a list of responses, or a sequence of predictor-array/response samples, optionally adding an intercept as the first artificial predictor value (default: false), and return the best-fitting list of model parameters β for each element of the predictor arrays.
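A few of the pointwise functions, sketched under the assumption that the documented members correspond to the Pointwise* methods of the Math.NET Numerics vector type; the exact method names are assumptions.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class VectorPointwise
{
    static void Main()
    {
        var v = Vector<double>.Build.DenseOfArray(new[] { 0.25, 1.0, 4.0 });

        Console.WriteLine(v.PointwiseSqrt());   // (0.5, 1, 2)
        Console.WriteLine(v.PointwiseExp());    // element-wise exponential
        Console.WriteLine(v.PointwiseLog());    // element-wise natural logarithm
        Console.WriteLine(v.PointwiseAbs());    // element-wise absolute value
        Console.WriteLine(v.PointwiseRound());  // element-wise rounding
    }
}
```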
* One family of overloads solves the least-squares problem via the Cholesky decomposition of the normal equations; it accepts a predictor matrix X with a response vector or matrix Y, a list of predictor arrays with responses, or a sequence of predictor-array/response samples, optionally adding an intercept as the first artificial predictor value (default: false).
* A second family uses an orthogonal (QR) decomposition, which is more numerically stable than the normal equations but also slower; it offers the same set of overloads.
* A third family uses a singular value decomposition, which is more numerically stable still (especially for ill-conditioned problems) than the normal equations or QR, but also slower; it offers the same set of overloads.
* Simple line fitting performs a least-squares fit of points (x, y) to a line y : x -> a + b*x, returning the best-fitting parameters as an (a, b) tuple, where a is the intercept and b the slope; variants accept separate predictor and response arrays or a sequence of predictor/response tuples, and a second pair of variants fits a line through the origin, y : x -> b*x, returning only the slope b.
* Weighted linear regression uses the normal equations with a weight matrix W, usually diagonal with one entry per predictor row; overloads accept a response vector or matrix, an optional intercept flag, or a list of sample vectors with their responses together with a list of per-sample weights.
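A concrete least-squares fit, as a sketch assuming the Math.NET Numerics Fit and MultipleRegression classes (MathNet.Numerics and MathNet.Numerics.LinearRegression namespaces); the method names Fit.Line and MultipleRegression.QR are assumptions based on that library.

```csharp
using System;
using MathNet.Numerics;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearRegression;

class RegressionExample
{
    static void Main()
    {
        // Simple line fit y = a + b*x from plain arrays.
        double[] x = { 1, 2, 3, 4, 5 };
        double[] y = { 2.1, 3.9, 6.2, 8.1, 9.8 };
        var line = Fit.Line(x, y);
        Console.WriteLine($"intercept a = {line.Item1}, slope b = {line.Item2}");

        // The same fit as a multiple regression solved via QR: the design matrix X
        // has a column of ones (intercept) and a column with the predictor values.
        var X = Matrix<double>.Build.DenseOfColumnArrays(
            new[] { 1.0, 1.0, 1.0, 1.0, 1.0 }, x);
        var yv = Vector<double>.Build.DenseOfArray(y);
        Vector<double> beta = MultipleRegression.QR(X, yv);
        Console.WriteLine(beta);   // (a, b), numerically close to Fit.Line
    }
}
```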
Locally-weighted regression, ODE solvers and BFGS minimizers
* Locally-weighted linear regression using the normal equations.
* Adams-Bashforth multistep ODE integrators of first order (equivalent to forward Euler), second, third and fourth order: each takes the initial value, the start time, the end time, the size of the output array (the larger, the finer) and the ODE model, and returns an approximation array of size N.
* Runge-Kutta integrators of second and fourth order for a single ODE, and second- and fourth-order variants for ODE systems, follow the same parameter pattern (initial value or vector, start time, end time, output size, ODE function).
* The bounded Broyden-Fletcher-Goldfarb-Shanno (BFGS-B) algorithm is an iterative method for box-constrained nonlinear optimization (see http://www.ece.northwestern.edu/~nocedal/PSfiles/limited.ps.gz); its minimization routine takes an objective function that must supply a gradient, the lower and upper bounds and an initial guess, and returns a minimization result containing the minimum and the exit condition.
* The unconstrained BFGS minimizer is created with a gradient tolerance, a parameter tolerance, a function-progress tolerance and a maximum number of iterations, and exposes the same kind of minimization routine; a shared base class factors out the common BFGS machinery.
* A lower-level BFGS solver (see http://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm, inspired by https://github.com/PatWie/CppNumericalSolvers/blob/master/src/BfgsSolver.cpp) finds a minimum from an initial guess, a function evaluator and a gradient evaluator, approximating the Hessian internally from the gradient (partial derivatives in each direction).
* Objective functions come in a frozen-evaluation form (must not be changed from the outside) that can produce a new unevaluated, independent copy, and a mutable-evaluation form that can produce an independent copy evaluated at the same point.
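Solving a single ODE with the fourth-order Runge-Kutta integrator described above, as a hedged sketch: it assumes the Math.NET Numerics MathNet.Numerics.OdeSolvers namespace and a method named RungeKutta.FourthOrder with the signature (y0, start, end, N, f); verify the exact signature against the installed version.

```csharp
using System;
using MathNet.Numerics.OdeSolvers;

class OdeExample
{
    static void Main()
    {
        // Solve dy/dt = -2*y with y(0) = 1 on t in [0, 1].
        // The exact solution is y(t) = exp(-2*t), used below for comparison.
        double y0 = 1.0;
        int n = 11;   // size of the output array (the larger, the finer)
        double[] y = RungeKutta.FourthOrder(y0, 0.0, 1.0, n, (t, yt) => -2.0 * yt);

        for (int i = 0; i < y.Length; i++)
        {
            double t = i * (1.0 / (n - 1));
            Console.WriteLine($"t={t:F1}  rk4={y[i]:F6}  exact={Math.Exp(-2.0 * t):F6}");
        }
    }
}
```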
Nonlinear least squares and minimizers
* The nonlinear least-squares objective exposes its state: the observed y-values, the observation weights, the y-values of the fitted model at the independent values, the current parameter values, the residual sum of squares, the gradient vector G = J'(y - f(x; p)), the approximated Hessian H = J'J, the numbers of function and jacobian evaluations, the degree of freedom, and the scale factor for the initial mu.
* Levenberg-Marquardt nonlinear least-squares fitting takes the objective (model, observations and parameter bounds), the initial guess values, the initial damping parameter mu, stopping thresholds for the infinity norm of the gradient, the L2 norm of the parameter change and the L2 norm of the residuals, and a maximum number of iterations, and returns the minimization result.
* The limited-memory BFGS (L-BFGS) minimizer is created with the number of gradients and steps to store; it finds the minimum of an objective function (which must supply a gradient) from an initial guess and returns the minimum together with the exit condition.
* A line search looks for a step size alpha satisfying the weak Wolfe conditions: (i) the Armijo rule f(x_k + alpha_k p_k) <= f(x_k) + c1 alpha_k p_k^T g(x_k), and (ii) the curvature condition p_k^T g(x_k + alpha_k p_k) >= c2 p_k^T g(x_k), where g(x) is the gradient of f(x) and 0 < c1 < c2 < 1. The implementation follows http://www.math.washington.edu/~burke/crs/408/lectures/L9-weak-Wolfe.pdf (see also http://en.wikipedia.org/wiki/Wolfe_conditions); it takes the objective evaluated at the starting point of the search, the search direction, the initial step size and, in one overload, an upper bound on the step.
* A minimizer base class is constructed with the gradient tolerance, the parameter tolerance, the function-progress tolerance and the maximum number of iterations.
* The Nelder-Mead simplex algorithm finds a minimum when no gradient is available (called fminsearch() in Matlab); the algorithm is described at http://se.mathworks.com/help/matlab/math/optimizing-nonlinear-functions.html#bsgpq6p-11 and https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method. Minimization can be run without an initial perturbation (using the fminsearch() defaults) or with an explicit initial perturbation; either way it takes the objective function (no gradient or Hessian needed) and an initial guess, and returns the minimum point.
* Internal simplex helpers evaluate the objective at each vertex to build an error profile, check whether the error values have so little range that the search has converged, construct the initial simplex from the starting guess and per-dimension step sizes, test a scaling of the highest point and replace it if it is an improvement, contract the simplex uniformly around the lowest point, and compute the centroid of all points except the worst.
* The minimization result exposes the best-fit parameters, their standard errors, the fitted y-values, and the covariance and correlation matrices at the minimizing point; configurable stopping criteria include thresholds for the function value or residual L2 norm, the L2 norm of the parameter change, the infinity norm of the gradient, the maximum number of iterations, the lower and upper parameter bounds, and per-parameter scale factors.
* Objective function variants cover every combination of derivative availability and evaluation strategy: neither gradient nor Hessian; gradient only (greedy or lazy evaluation); Hessian only (greedy or lazy); both gradient and Hessian (greedy or lazy); and, for scalar objectives, neither derivative or the first derivative only.
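Gradient-free minimization with the simplex algorithm typically looks like the following. This is a sketch assuming the Math.NET Numerics MathNet.Numerics.Optimization namespace; the names ObjectiveFunction.Value, NelderMeadSimplex, FindMinimum, MinimizingPoint and ReasonForExit are recalled from that library and should be verified against the installed version.

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.Optimization;

class MinimizationExample
{
    static void Main()
    {
        // Rosenbrock function: minimum value 0 at (1, 1). No gradient is supplied,
        // which is exactly the case the Nelder-Mead simplex is intended for.
        var objective = ObjectiveFunction.Value(
            p => Math.Pow(1 - p[0], 2) + 100 * Math.Pow(p[1] - p[0] * p[0], 2));

        // Convergence tolerance and maximum number of iterations.
        var solver = new NelderMeadSimplex(1e-8, 10000);
        var initialGuess = Vector<double>.Build.DenseOfArray(new[] { -1.2, 1.0 });

        MinimizationResult result = solver.FindMinimum(objective, initialGuess);

        Console.WriteLine(result.MinimizingPoint);   // close to (1, 1)
        Console.WriteLine(result.ReasonForExit);
    }
}
```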
Nonlinear least-squares models, trust-region solvers, permutations and polynomials
* A scalar objective function variant provides both the first and second derivatives.
* Objective models and objective functions for nonlinear least-squares regression come with either a user-supplied jacobian or a numerical jacobian of configurable accuracy order. An adapter wraps a value-only objective function and supplies a gradient via forward finite differences; each gradient evaluation costs an additional number of function evaluations equal to the function's number of input parameters.
* The nonlinear least-squares objective model exposes the independent-variable values, the observations and their weights, which parameters are fixed or free, the number of observations, the number of unknown parameters, the degree of freedom, the numbers of function and jacobian calls, the parameter values, the fitted y-values, the residual sum of squares, the gradient of x and p, and the Hessian J'WJ; observed data is set with one call and the parameters (initial values plus the fixed/free list) with another.
* Trust-region nonlinear least-squares fitting is available with the dogleg subproblem and with the Newton-Conjugate-Gradient subproblem. The solver takes the objective model (function, jacobian, observations and parameter bounds), the subproblem, the initial guess, stopping thresholds for the L2 norm of the residuals, the infinity norm of the gradient, the L2 norm of the parameter change and the trust-region radius, and a maximum number of iterations.
* The Permutation class represents a permutation of a subset of the natural numbers: entry indices[i] is the location to which i is permuted. It exposes the number of elements the permutation is over, computes where a given index is permuted to, computes the inverse permutation, and converts to and from a sequence of inversions (from Wikipedia: the permutation 12043 has the inversions (0,2), (1,2) and (3,4), encoded as the array [22244]). A check verifies whether an array represents a proper permutation.
* The Polynomial class models a single-variable polynomial with real-valued coefficients and non-negative exponents. Coefficients are stored in ascending order, with the index matching the exponent; the degree is the largest monomial exponent (the degree of y=x^2+x^5 is 5, of y=3 it is 0, and the null polynomial reports -1 because the correct degree, negative infinity, cannot be represented by an integer).
* Constructors create a zero polynomial (optionally with a coefficient array of a given length N, supporting a degree of at most N-1), a constant polynomial (3.0 -> "p : x -> 3.0"), or a polynomial from a coefficient array or enumerable ({5, 0, 2} -> "p : x -> 5 + 0 x^1 + 2 x^2").
* A least-squares fit finds the k-order polynomial y : x -> p0 + p1*x + p2*x^2 + ... + pk*x^k through given points. Evaluation overloads compute the polynomial at a point or at a set of points, with coefficients ordered ascending by power (for example, coefficients [3, -1, 2] represent y = 2x^2 - x + 3). The complex roots are calculated by eigenvalue decomposition of a matrix A with eig(A) equal to the roots of the polynomial; A is similar to the companion matrix of the polynomial, in that its transpose is the column-flip of the companion matrix.
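The polynomial members above in use, as a sketch assuming the Math.NET Numerics Polynomial class; the names Polynomial.Fit, Evaluate and Roots are assumptions based on that library.

```csharp
using System;
using MathNet.Numerics;

class PolynomialExample
{
    static void Main()
    {
        // Coefficients in ascending order: 2 - 3x + x^2 = (x - 1)(x - 2).
        var p = new Polynomial(new double[] { 2.0, -3.0, 1.0 });

        Console.WriteLine(p.Degree);          // 2
        Console.WriteLine(p.Evaluate(3.0));   // 2 - 9 + 9 = 2

        // Complex roots via the eigenvalue decomposition described above.
        foreach (var root in p.Roots())
            Console.WriteLine(root);          // 1 and 2 (imaginary parts ~ 0)

        // Least-squares fit of a 2nd-order polynomial through sampled points.
        double[] x = { 0, 1, 2, 3, 4 };
        double[] y = { 2.0, 0.1, -0.1, 1.9, 6.2 };
        Polynomial fitted = Polynomial.Fit(x, y, 2);
        Console.WriteLine(fitted);
    }
}
```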
Polynomial: arithmetic and formatting
* Static arithmetic covers point-wise addition and subtraction of two polynomials, addition and subtraction of a scalar and a polynomial (in either order), negation, multiplication of two polynomials (convolution), scaling by a scalar, and scaling by division by a scalar; each takes the left and right operands and returns the resulting polynomial.
* Euclidean long division of two polynomials a and b returns a tuple holding the quotient q and the remainder r such that a = q*b + r; point-wise division and point-wise multiplication of two polynomials are also provided, as is an instance method returning the quotient-with-remainder against a given right polynomial.
* Operator overloads mirror the static methods: polynomial addition and subtraction, adding or subtracting a scalar (in either order), unary negation, polynomial multiplication (convolution), multiplication by a scalar (in either order), and division by a scalar.
* Formatting methods render the polynomial either in ascending order, e.g. "4.3 + 2.0x^2 - x^3", or in descending order, e.g. "x^3 + 2.0x^2 - 4.3", with several overloads for format strings and providers.
* A clone method creates a new object that is a copy of the current instance.

Utilities for working with floating-point numbers
* Useful links: http://docs.sun.com/source/806-3568/ncg_goldberg.html#689 (What every computer scientist should know about floating-point arithmetic) and http://en.wikipedia.org/wiki/Machine_epsilon (definition of machine epsilon).
* A comparison routine determines which of two doubles is bigger: it returns -1 if a < b, 0 if a and b are almost equal according to the given tolerance, and +1 if a > b. Overloads express the tolerance as an absolute accuracy, as a relative accuracy, as a number of decimal places (must be 1 or larger), or as a maximum error in units in the last place (ulps, must be 1 or larger).
* "Is larger" comparisons determine whether the first value is larger than the second to within a tolerance. Overloads accept a number of decimal places, an absolute accuracy, a relative accuracy, or a maximum number of ulps (equality comparison based on the binary representation; must be 1 or larger). For the decimal-places overloads, two values count as equal when their difference is smaller than 0.5 * 10^(-decimalPlaces): dividing by two gives half the range on each side of the numbers, so with two decimal places 0.01 is equal to anything between 0.005 and 0.015, but not to 0.02 and not to 0.00.
* "Is smaller" comparisons mirror the "is larger" family, with the same set of overloads and the same equality rules.
* A check reports whether a given double value is finite, i.e. neither NaN nor infinity.
* A constant gives the number of binary digits used to represent the significand of a double-precision floating-point value, i.e. the digits that represent the actual number: in a value written as 0.134556 * 10^5, the digits are 0.134556 and the exponent is 5.
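The half-range rule used by the decimal-places comparisons is easy to get wrong, so here is a tiny stand-alone sketch of the rule as described above. It only illustrates the documented behaviour with plain C#; it is not the library's actual implementation.

```csharp
using System;

class DecimalPlacesRule
{
    // Two values count as equal to N decimal places when their difference is
    // below half a unit in the N-th decimal place, i.e. 0.5 * 10^(-N).
    static bool EqualToDecimalPlaces(double a, double b, int decimalPlaces) =>
        Math.Abs(a - b) < 0.5 * Math.Pow(10, -decimalPlaces);

    static void Main()
    {
        Console.WriteLine(EqualToDecimalPlaces(0.01, 0.012, 2));  // True:  |diff| = 0.002 < 0.005
        Console.WriteLine(EqualToDecimalPlaces(0.01, 0.02, 2));   // False: |diff| = 0.010 >= 0.005
        Console.WriteLine(EqualToDecimalPlaces(0.01, 0.00, 2));   // False, matching the remarks above
    }
}
```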
* A companion constant gives the number of binary digits used to represent the significand of a single-precision floating-point value, with the same interpretation as the double-precision constant above.
* Standard epsilon constants give the maximum relative precision of IEEE 754 double-precision (64 bit) and single-precision (32 bit) numbers, both in the definition of Prof. Demmel (as used in LAPACK and Scilab) and in the definition of Prof. Higham (as used in the ISO C standard and MATLAB).
* The actual double-precision machine epsilon is the smallest number that can be subtracted from 1 yielding a result different from 1 (Demmel's definition, also known as unit roundoff); on a standard machine it is equivalent to `DoublePrecision`. Its positive counterpart is the smallest number that can be added to 1 yielding a result different from 1 (Higham's definition), equivalent to `PositiveDoublePrecision` on a standard machine.
* Further constants give the number of significant decimal places of double-precision and single-precision numbers, the value 10 * 2^(-53) = 1.11022302462516E-15, and the value 10 * 2^(-24) = 5.96046447753906E-07.
* Helpers return the magnitude of a number, and a number divided by its magnitude (effectively a value between -10 and 10).
* "Directional" conversions map a double to a long and a float to an int that act like the original: a negative floating-point value maps to a negative integer that starts at 0 and becomes more negative as the floating-point value becomes more negative.
* Increment and decrement helpers move a floating-point number to the next larger or next smaller value representable by the data type, optionally repeated a given number of times; the step length depends on the value, Increment(double.MaxValue) returns positive infinity and Decrement(double.MinValue) returns negative infinity.
* Coercion helpers force small numbers near zero to exactly zero: by a maximum count of representable numbers between zero and the value (throws if the count is negative), by an absolute threshold (throws if the threshold is negative), or with a default threshold of 2^(-53) = 1.11e-16.
* Range helpers determine the interval of floating-point numbers that match a value within a given ulps tolerance (throws if the tolerance is negative), including the largest and smallest matching numbers, and the numbers of ulps that match a value within a given relative difference (throws if the difference is negative, or if the value is infinite or NaN).
* A counting helper evaluates how many representable numbers lie between two doubles; the second number is included, so two equal numbers evaluate to zero and two neighboring numbers evaluate to one (the result is the count of numbers strictly between them plus one). It throws if either argument is infinite or NaN.
* Epsilon helpers evaluate the minimum distance to the next distinguishable number near a given double or float value, as a relative epsilon (positive value or NaN); the negative-epsilon variants return half of the more common positive epsilon.
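The neighbouring-value and coercion helpers in use, as a sketch assuming the Math.NET Numerics Precision class and its extension methods Increment, Decrement, CoerceZero and EpsilonOf; those names are assumptions and should be checked against the installed version.

```csharp
using System;
using MathNet.Numerics;

class NeighborValues
{
    static void Main()
    {
        double x = 1.0;

        // Next representable neighbours of 1.0 (one step up / down).
        double up = x.Increment();
        double down = x.Decrement();
        Console.WriteLine(up - x);       // about 2.22e-16
        Console.WriteLine(x - down);     // about 1.11e-16

        // Relative epsilon near a larger value.
        Console.WriteLine(Precision.EpsilonOf(1000.0));

        // Force a tiny rounding residue to exactly zero.
        double residue = 0.1 + 0.2 - 0.3;               // about 5.55e-17, not exactly 0
        Console.WriteLine(residue.CoerceZero(1e-12));   // 0
    }
}
```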
+ + Relative Epsilon (positive float or NaN). + + Evaluates the negative epsilon. The more common positive epsilon is equal to two times this negative epsilon. + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive double or NaN) + Evaluates the positive epsilon. See also + + + + + Evaluates the minimum distance to the next distinguishable number near the argument value. + + The value used to determine the minimum distance. + Relative Epsilon (positive float or NaN) + Evaluates the positive epsilon. See also + + + + + Calculates the actual (negative) double precision machine epsilon - the smallest number that can be subtracted from 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Demmel. + + Positive Machine epsilon + + + + Calculates the actual positive double precision machine epsilon - the smallest number that can be added to 1, yielding a results different than 1. + This is also known as unit roundoff error. According to the definition of Prof. Higham. + + Machine epsilon + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum absolute error. + + The first value. + The second value. + The absolute accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum absolute error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal + within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + True if both doubles are almost equal up to the specified maximum error, false otherwise. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two doubles and determines if they are equal within + the specified maximum error. + + The first value. + The second value. 
+ The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two complex and determines if they are equal within + the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two real numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Checks whether two Complex numbers are almost equal. + + The first number + The second number + true if the two values differ by no more than 10 * 2^(-52); false otherwise. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + + + The values are equal if the difference between the two numbers is smaller than 0.5e-decimalPlaces. We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. 
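The "equal to within N decimal places (absolute measure)" overloads described above reduce to a single threshold test: the difference must be below half a unit in the last requested decimal place. A minimal sketch with an invented method name:

```csharp
using System;

static class DecimalPlacesSketch
{
    // True when |a - b| < 0.5 * 10^(-decimalPlaces), i.e. the values agree to
    // the requested number of decimal places in an absolute sense.
    public static bool AlmostEqualAbsolute(double a, double b, int decimalPlaces)
    {
        if (double.IsNaN(a) || double.IsNaN(b))
            return false;

        double threshold = 0.5 * Math.Pow(10.0, -decimalPlaces);
        return Math.Abs(a - b) < threshold;
    }
}
```

With decimalPlaces = 2 this accepts 0.01 against anything strictly between 0.005 and 0.015, matching the worked example in the remarks above.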
+ + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The norm of the first value (can be negative). + The norm of the second value (can be negative). + The norm of the difference of the two values (can be negative). + The number of decimal places. + Thrown if is smaller than zero. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + + + The values are equal if the difference between the two numbers is smaller than 10^(-numberOfDecimalPlaces). We divide by + two so that we have half the range on each side of the numbers, e.g. if == 2, then 0.01 will equal between + 0.005 and 0.015, but not 0.02 and not 0.00 + + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not, using the + number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the specified number of decimal places or not. If the numbers + are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two doubles and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. 
+ + + + Determines the 'number' of floating point numbers between two values (i.e. the number of discrete steps + between the two numbers) and then checks if that is within the specified tolerance. So if a tolerance + of 1 is passed then the result will be true only if the two numbers have the same binary representation + OR if they are two adjacent numbers that only differ by one step. + + + The comparison method used is explained in http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm . The article + at http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx explains how to transform the C code to + .NET enabled code without using pointers and unsafe code. + + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two floats and determines if they are equal to within the tolerance or not. Equality comparison is based on the binary representation. + + The first value. + The second value. + The maximum number of floating point values between the two values. Must be 1 or larger. + Thrown if is smaller than one. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. 
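The binary-representation comparison described above is just the ULP distance from the earlier sketch checked against a tolerance. The name below is invented, NaN/infinity handling is omitted, and the NumbersBetween helper is reused from the earlier UlpSketch example:

```csharp
using System;

static class UlpCompareSketch
{
    // True when a and b are at most maxNumbersBetween representable doubles apart.
    // A tolerance of 1 accepts only identical or directly adjacent values.
    public static bool AlmostEqualNumbersBetween(double a, double b, long maxNumbersBetween)
    {
        if (maxNumbersBetween < 1)
            throw new ArgumentOutOfRangeException(nameof(maxNumbersBetween));
        if (double.IsNaN(a) || double.IsNaN(b))
            return false;

        return UlpSketch.NumbersBetween(a, b) <= maxNumbersBetween;
    }
}
```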
+ + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The number of decimal places. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two lists of doubles and determines if they are equal within the + specified maximum error. + + The first value list. + The second value list. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two vectors and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two vectors and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal within the specified maximum error. + + The first value. + The second value. + The accuracy required for being almost equal. + + + + Compares two matrices and determines if they are equal to within the specified number + of decimal places or not, using the number of decimal places as an absolute measure. + + The first value. + The second value. + The number of decimal places. + + + + Compares two matrices and determines if they are equal to within the specified number of decimal places or not. + If the numbers are very close to zero an absolute difference is compared, otherwise the relative difference is compared. + + The first value. + The second value. + The number of decimal places. + + + + Support Interface for Precision Operations (like AlmostEquals). + + Type of the implementing class. + + + + Returns a Norm of a value of this type, which is appropriate for measuring how + close this value is to zero. + + A norm of this value. + + + + Returns a Norm of the difference of two values of this type, which is + appropriate for measuring how close together these two values are. 
+ + The value to compare with. + A norm of the difference between this and the other value. + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. + + + + Revision + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + Frees the memory allocated to the MKL memory pool. + + + + + Frees the memory allocated to the MKL memory pool on the current thread. + + + + + Disable the MKL memory pool. May impact performance. + + + + + Retrieves information about the MKL memory pool. + + On output, returns the number of memory buffers allocated. + Returns the number of bytes allocated to all memory buffers. + + + + Enable gathering of peak memory statistics of the MKL memory pool. + + + + + Disable gathering of peak memory statistics of the MKL memory pool. + + + + + Measures peak memory usage of the MKL memory pool. + + Whether the usage counter should be reset. + The peak number of bytes allocated to all memory buffers. + + + + Disable gathering memory usage + + + + + Enable gathering memory usage + + + + + Return peak memory usage + + + + + Return peak memory usage and reset counter + + + + + Consistency vs. performance trade-off between runs on different machines. + + + + Consistent on the same CPU only (maximum performance) + + + Consistent on Intel and compatible CPUs with SSE2 support (maximum compatibility) + + + Consistent on Intel CPUs supporting SSE2 or later + + + Consistent on Intel CPUs supporting SSE4.2 or later + + + Consistent on Intel CPUs supporting AVX or later + + + Consistent on Intel CPUs supporting AVX2 or later + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. + + + + + Helper class to load native libraries depending on the architecture of the OS and process. + + + + + Dictionary of handles to previously loaded libraries, + + + + + Gets a string indicating the architecture and bitness of the current process. + + + + + If the last native library failed to load then gets the corresponding exception + which occurred or null if the library was successfully loaded. + + + + + Load the native library with the given filename. + + The file name of the library to load. + Hint path where to look for the native binaries. Can be null. + True if the library was successfully loaded or if it has already been loaded. + + + + Try to load a native library by providing its name and a directory. + Tries to load an implementation suitable for the current CPU architecture + and process mode if there is a matching subfolder. + + True if the library was successfully loaded or if it has already been loaded. + + + + Try to load a native library by providing the full path including the file name of the library. + + True if the library was successfully loaded or if it has already been loaded. + + + Revision + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + This method is safe to call, even if the provider is not loaded. + + + + + P/Invoke methods to the native math libraries. + + + + + Name of the native DLL. 
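The provider plumbing described here is normally driven from the Control facade rather than by calling the loader directly. The sketch below is a hedged example: UseNativeMKL and UseManaged are named in the documentation above, but TryUseNativeMKL and the LinearAlgebraProvider property are assumptions for this sketch and may differ from the actual API.

```csharp
using System;
using MathNet.Numerics;

static class ProviderSetup
{
    public static void Configure()
    {
        // Prefer the native MKL provider when its binaries can be loaded,
        // otherwise fall back to the fully managed provider.
        // TryUseNativeMKL() is assumed here; the documented alternative is UseNativeMKL().
        if (!Control.TryUseNativeMKL())
        {
            Control.UseManaged();
        }

        // Assumed property: report which provider ended up active.
        Console.WriteLine(Control.LinearAlgebraProvider);
    }
}
```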
+ + + + + Gets or sets the Fourier transform provider. Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsFFTProviderPath` or the default probing paths. + + + + + Try to use a native provider, if available. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsFFTProvider" environment variable, + or fall back to the best provider. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Sequences with length greater than Math.Sqrt(Int32.MaxValue) + 1 + will cause k*k in the Bluestein sequence to overflow (GH-286). + + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Generate the bluestein sequence for the provided problem size. + + Number of samples. + Bluestein sequence exp(I*Pi*k^2/N) + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Convolution with the bluestein sequence (Parallel Version). + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Swap the real and imaginary parts of each sample. + + Sample Vector. + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Bluestein generic FFT for arbitrary sized sample vectors. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Fully rescale the FFT result. + + Sample Vector. + + + + Half rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Fully rescale the FFT result (e.g. for symmetric transforms). + + Sample Vector. + + + + Radix-2 Reorder Helper Method + + Sample type + Sample vector + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 Step Helper Method + + Sample vector. + Fourier series exponent sign. + Level Group Size. + Index inside of the level. + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sized sample vectors. + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). 
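The Bluestein sequence mentioned above, exp(i·π·k²/N), turns an arbitrary-length DFT into a convolution that can be handled by power-of-two FFTs. It can be generated directly; reducing k² modulo 2N keeps the computation in integer range, which also sidesteps the k·k overflow noted above (GH-286). Illustrative only, not the provider's code:

```csharp
using System;
using System.Numerics;

static class BluesteinSketch
{
    // Chirp sequence exp(i*pi*k^2/N) for k = 0..N-1.
    public static Complex[] Sequence(int n)
    {
        var seq = new Complex[n];
        for (int k = 0; k < n; k++)
        {
            // k*k mod 2N leaves exp(i*pi*k^2/N) unchanged and avoids Int32 overflow.
            long t = ((long)k * k) % (2L * n);
            double angle = Math.PI * t / n;
            seq[k] = new Complex(Math.Cos(angle), Math.Sin(angle));
        }
        return seq;
    }
}
```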
+ + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + + Radix-2 generic FFT for power-of-two sample vectors (Parallel Version). + + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + NVidia's CUDA Toolkit linear algebra provider. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . 
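The vector and matrix kernels documented here (and repeated below for each numeric type) follow the usual BLAS conventions. Two plain reference versions give the idea, assuming column-major storage as LAPACK-style routines conventionally use; the providers replace these loops with optimized native or parallel code:

```csharp
static class BlasSketch
{
    // AXPY: result = y + alpha * x (all arrays the same length).
    public static void Axpy(double[] y, double alpha, double[] x, double[] result)
    {
        for (int i = 0; i < y.Length; i++)
            result[i] = y[i] + alpha * x[i];
    }

    // Simplified GEMM: result = x * y with alpha = 1, beta = 0, no transposition.
    // Column-major storage, xCols must equal yRows, result is xRows-by-yCols.
    public static void MatrixMultiply(
        double[] x, int xRows, int xCols,
        double[] y, int yRows, int yCols,
        double[] result)
    {
        for (int j = 0; j < yCols; j++)
            for (int i = 0; i < xRows; i++)
            {
                double sum = 0.0;
                for (int k = 0; k < xCols; k++)
                    sum += x[i + k * xRows] * y[k + j * yRows];
                result[i + j * xRows] = sum;
            }
    }
}
```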
+ On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. 
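As a concrete picture of what the POTRF-equivalent routine above produces, here is a textbook in-place Cholesky factorization of a column-major array into its lower triangular factor. It is a reference sketch, not the provider's implementation, and assumes the input is symmetric positive definite:

```csharp
using System;

static class CholeskySketch
{
    // Overwrites the square matrix a (order-by-order, column-major) with L such that A = L*Lᵀ.
    public static void Factor(double[] a, int order)
    {
        for (int j = 0; j < order; j++)
        {
            // Diagonal entry: L[j,j] = sqrt(A[j,j] - sum_k L[j,k]^2).
            double d = a[j + j * order];
            for (int k = 0; k < j; k++)
                d -= a[j + k * order] * a[j + k * order];
            if (d <= 0.0)
                throw new ArgumentException("Matrix is not positive definite.");
            d = Math.Sqrt(d);
            a[j + j * order] = d;

            // Below-diagonal entries of column j.
            for (int i = j + 1; i < order; i++)
            {
                double s = a[i + j * order];
                for (int k = 0; k < j; k++)
                    s -= a[i + k * order] * a[j + k * order];
                a[i + j * order] = s / d;
            }

            // Clear the strictly upper triangle so only L remains.
            for (int i = 0; i < j; i++)
                a[i + j * order] = 0.0;
        }
    }
}
```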
The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If calling this method fails, consider to fall back to alternatives like the managed provider. + + + + + Frees memory buffers, caches and handles allocated in or to the provider. 
+ Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. 
+ + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . 
+ On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + How to transpose a matrix. + + + + + Don't transpose a matrix. + + + + + Transpose a matrix. + + + + + Conjugate transpose a complex matrix. + + If a conjugate transpose is used with a real matrix, then the matrix is just transposed. + + + + Types of matrix norms. + + + + + The 1-norm. + + + + + The Frobenius norm. + + + + + The infinity norm. + + + + + The largest absolute value norm. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Interface to linear algebra algorithms that work off 1-D arrays. + + Supported data types are Double, Single, Complex, and Complex32. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. 
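The four norm types listed above have simple reference definitions. The sketches below assume a column-major rows-by-columns array and are illustrative only:

```csharp
using System;

static class MatrixNormSketch
{
    // 1-norm: maximum absolute column sum.
    public static double OneNorm(double[] m, int rows, int cols)
    {
        double norm = 0.0;
        for (int j = 0; j < cols; j++)
        {
            double s = 0.0;
            for (int i = 0; i < rows; i++) s += Math.Abs(m[i + j * rows]);
            norm = Math.Max(norm, s);
        }
        return norm;
    }

    // Infinity norm: maximum absolute row sum.
    public static double InfinityNorm(double[] m, int rows, int cols)
    {
        double norm = 0.0;
        for (int i = 0; i < rows; i++)
        {
            double s = 0.0;
            for (int j = 0; j < cols; j++) s += Math.Abs(m[i + j * rows]);
            norm = Math.Max(norm, s);
        }
        return norm;
    }

    // Frobenius norm: square root of the sum of squared entries.
    public static double FrobeniusNorm(double[] m)
    {
        double s = 0.0;
        foreach (double v in m) s += v * v;
        return Math.Sqrt(s);
    }

    // Largest absolute value norm: max |entry|.
    public static double LargestAbsoluteValueNorm(double[] m)
    {
        double norm = 0.0;
        foreach (double v in m) norm = Math.Max(norm, Math.Abs(v));
        return norm;
    }
}
```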
+ + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiply elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. 
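As the remarks above note, the element-wise operations have no direct BLAS counterpart; a reference version is just one loop per operation, which the providers vectorize or parallelize. Two representative sketches (the other point-wise operations follow the same pattern):

```csharp
using System;

static class PointWiseSketch
{
    // z = x .* y, element by element.
    public static void Multiply(double[] x, double[] y, double[] result)
    {
        for (int i = 0; i < result.Length; i++)
            result[i] = x[i] * y[i];
    }

    // z = x .^ y, element by element.
    public static void Power(double[] x, double[] y, double[] result)
    {
        for (int i = 0; i < result.Length; i++)
            result[i] = Math.Pow(x[i], y[i]);
    }
}
```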
+ The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the full QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by QR factor. This is only used for the managed provider and can be + null for the native provider. The native provider uses the Q portion stored in the R matrix. + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + On entry the B matrix; on exit the X matrix. 
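The thin QR factorization described above (M ≥ N, Q is M-by-N, R is N-by-N) can be illustrated with modified Gram–Schmidt. The provider itself documents a Householder-style GEQRF/ORGQR approach, so this sketch only shows what the factors look like, not how they are computed internally. Column-major storage and a zero-initialized R (as a freshly allocated C# array is) are assumed:

```csharp
using System;

static class ThinQrSketch
{
    // Factor a (m-by-n, m >= n, column-major) into q (m-by-n) and r (n-by-n) with a = q*r.
    public static void Factor(double[] a, int m, int n, double[] q, double[] r)
    {
        Array.Copy(a, q, m * n);
        for (int j = 0; j < n; j++)
        {
            // Orthogonalize column j against the already finished columns.
            for (int k = 0; k < j; k++)
            {
                double dot = 0.0;
                for (int i = 0; i < m; i++) dot += q[i + k * m] * q[i + j * m];
                r[k + j * n] = dot;
                for (int i = 0; i < m; i++) q[i + j * m] -= dot * q[i + k * m];
            }

            // Normalize; the norm becomes the diagonal entry of R.
            double norm = 0.0;
            for (int i = 0; i < m; i++) norm += q[i + j * m] * q[i + j * m];
            norm = Math.Sqrt(norm);
            r[j + j * n] = norm;
            for (int i = 0; i < m; i++) q[i + j * m] /= norm;
        }
    }
}
```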
+ The number of columns of B. + On exit, the solution matrix. + Rows must be greater or equal to columns. + The type of QR factorization to perform. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Gets or sets the linear algebra provider. + Consider to use UseNativeMKL or UseManaged instead. + + The linear algebra provider. + + + + Optional path to try to load native provider binaries from. + If not set, Numerics will fall back to the environment variable + `MathNetNumericsLAProviderPath` or the default probing paths. + + + + + Try to use a native provider, if available. + + + + + Use the best provider available. + + + + + Use a specific provider if configured, e.g. using the + "MathNetNumericsLAProvider" environment variable, + or fall back to the best provider. + + + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. 
+ + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). 
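The cache-oblivious multiplication above recursively halves the largest of the three dimensions until the sub-problem is small, then falls back to a plain triple loop; the row/column shifts select the current sub-block. A simplified sketch (no transposition or alpha/beta scaling, C must be zeroed by the caller, column-major with explicit leading dimensions):

```csharp
static class CacheObliviousSketch
{
    // c[rowC.., colC..] += a[rowA.., colA..] (m-by-k) * b[rowB.., colB..] (k-by-n).
    public static void Multiply(
        double[] a, double[] b, double[] c,
        int rowA, int colA, int rowB, int colB, int rowC, int colC,
        int m, int n, int k,
        int lda, int ldb, int ldc)
    {
        const int cutoff = 64;
        if (m <= cutoff && n <= cutoff && k <= cutoff)
        {
            // Small enough: plain triple loop, accumulating into c.
            for (int j = 0; j < n; j++)
                for (int i = 0; i < m; i++)
                {
                    double sum = 0.0;
                    for (int p = 0; p < k; p++)
                        sum += a[(rowA + i) + (colA + p) * lda] * b[(rowB + p) + (colB + j) * ldb];
                    c[(rowC + i) + (colC + j) * ldc] += sum;
                }
            return;
        }

        if (m >= n && m >= k)
        {
            // Split the rows of A and C.
            int h = m / 2;
            Multiply(a, b, c, rowA, colA, rowB, colB, rowC, colC, h, n, k, lda, ldb, ldc);
            Multiply(a, b, c, rowA + h, colA, rowB, colB, rowC + h, colC, m - h, n, k, lda, ldb, ldc);
        }
        else if (n >= m && n >= k)
        {
            // Split the columns of B and C.
            int h = n / 2;
            Multiply(a, b, c, rowA, colA, rowB, colB, rowC, colC, m, h, k, lda, ldb, ldc);
            Multiply(a, b, c, rowA, colA, rowB, colB + h, rowC, colC + h, m, n - h, k, lda, ldb, ldc);
        }
        else
        {
            // Split the shared dimension; both halves accumulate into the same C block.
            int h = k / 2;
            Multiply(a, b, c, rowA, colA, rowB, colB, rowC, colC, m, n, h, lda, ldb, ldc);
            Multiply(a, b, c, rowA, colA + h, rowB + h, colB, rowC, colC, m, n, k - h, lda, ldb, ldc);
        }
    }
}
```

For a full product, call it with all shifts set to 0, lda = m, ldb = k, ldc = m, after clearing c.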
The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + The B matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. 
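At the user level, the QR factor and solve routines documented above surface as the library's `QR()` decomposition object. A short sketch under that assumption (standard `Matrix<double>.Build` and `QR().Solve()` members, not the provider implementation itself); rows must be greater than or equal to columns, as the documentation requires, so the result is the least-squares solution:

```csharp
// Sketch only: QR factor once, then solve, mirroring the factor/solve
// split (GEQRF/ORGQR plus the "previously factored" solve) described above.
using System;
using MathNet.Numerics.LinearAlgebra;

var a = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1, 1 },
    { 1, 2 },
    { 1, 3 },
});
var b = Matrix<double>.Build.DenseOfArray(new double[,] { { 1 }, { 2 }, { 2 } });

var qr = a.QR();      // factor once
var x = qr.Solve(b);  // reuse the factorization for the solve

Console.WriteLine(x);
```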
+ + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. 
+ This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + The requested of the matrix. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. 
+ The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. 
+ The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. 
If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. 
+ The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. 
+ + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. 
+ The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. 
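For orientation only: the norm and multiply-with-update routines correspond to plain `Matrix<T>` members and operator expressions at the user level. A hedged sketch, assuming member names as in current Math.NET Numerics releases:

```csharp
// Sketch only: matrix norms and c = alpha*op(a)*op(b) + beta*c expressed
// with the high-level API instead of the provider-level GEMM routine.
using System;
using MathNet.Numerics.LinearAlgebra;

var a = Matrix<double>.Build.DenseOfArray(new double[,] { { 1, 2 }, { 3, 4 } });
var b = Matrix<double>.Build.DenseOfArray(new double[,] { { 0, 1 }, { 1, 0 } });
var c = Matrix<double>.Build.Dense(2, 2, 1.0);   // 2x2 matrix filled with 1.0

double oneNorm = a.L1Norm();           // maximum absolute column sum
double frobenius = a.FrobeniusNorm();  // sqrt of the sum of squared entries

// Multiply-and-update with op(a) chosen as the transpose of a, op(b) = b.
const double alpha = 2.0, beta = 0.5;
var updated = alpha * a.Transpose() * b + beta * c;

Console.WriteLine($"{oneNorm} {frobenius}");
Console.WriteLine(updated);
```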
+ + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Cache-Oblivious Matrix Multiplication + + if set to true transpose matrix A. + if set to true transpose matrix B. + The value to scale the matrix A with. + The matrix A. + Row-shift of the left matrix + Column-shift of the left matrix + The matrix B. + Row-shift of the right matrix + Column-shift of the right matrix + The matrix C. + Row-shift of the result matrix + Column-shift of the result matrix + The number of rows of matrix op(A) and of the matrix C. + The number of columns of matrix op(B) and of the matrix C. + The number of columns of matrix op(A) and the rows of the matrix op(B). + The constant number of rows of matrix op(A) and of the matrix C. + The constant number of columns of matrix op(B) and of the matrix C. + The constant number of columns of matrix op(A) and the rows of the matrix op(B). + Indicates if this is the first recursion. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. 
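As a hedged illustration of the Cholesky factor/solve path just documented (POTRF followed by POTRS in LAPACK terms), using the high-level `Cholesky()` decomposition object; the `Solve` and `Determinant` members are assumed to match the bundled library version:

```csharp
// Sketch only: factor a symmetric positive definite matrix once, then
// reuse the factorization for solves, as the routines above describe.
using System;
using MathNet.Numerics.LinearAlgebra;

var a = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 4.0, 1.0 },
    { 1.0, 3.0 },
});
var b = Vector<double>.Build.DenseOfArray(new[] { 1.0, 2.0 });

var chol = a.Cholesky();        // factor once
var x = chol.Solve(b);          // solve any number of right-hand sides
double det = chol.Determinant;  // cheap by-product of the factorization

Console.WriteLine(x);
Console.WriteLine(det);
```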
+ + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. 
+ If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + The managed linear algebra provider. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. 
+ The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. 
On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + The B matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. 
+ The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. + Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + Data array of matrix V (eigenvectors) + Previously tridiagonalized matrix by SymmetricTridiagonalize. + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of the eigenvectors + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. 
+ Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + The requested of the matrix. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. 
+ The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. 
On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Reduces a complex Hermitian matrix to a real symmetric tridiagonal matrix using unitary similarity transformations. + + Source matrix to reduce + Output: Arrays for internal storage of real parts of eigenvalues + Output: Arrays for internal storage of imaginary parts of eigenvalues + Output: Arrays that contains further information about the transformations. 
+ Order of initial matrix + This is derived from the Algol procedures HTRIDI by + Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Determines eigenvectors by undoing the symmetric tridiagonalize transformation + + Data array of matrix V (eigenvectors) + Previously tridiagonalized matrix by SymmetricTridiagonalize. + Contains further information about the transformations + Input matrix order + This is derived from the Algol procedures HTRIBK, by + by Smith, Boyle, Dongarra, Garbow, Ikebe, Klema, Moler, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of the eigenvectors + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Assumes that and have already been transposed. + + + + + Assumes that and have already been transposed. + + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. 
+ There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. 
+ + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . 
+ The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Symmetric Householder reduction to tridiagonal form. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. 
+ + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Conjugates an array. Can be used to conjugate a vector and a matrix. + + The values to conjugate. + This result of the conjugation. + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows. + The number of columns. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Multiples two matrices. 
result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Calculate Cholesky step + + Factor matrix + Number of rows + Column start + Total columns + Multipliers calculated previously + Number of available processors + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. Has to be different than . + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The column to solve for. 
+ + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Perform calculation of Q or R + + Work array + Index of column in work array + Q or R matrices + The first row in + The last row + The first column + The last column + Number of available CPUs + + + + Generate column from initial matrix to work array + + Work array + Initial matrix + The number of rows in matrix + The first row + Column index + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Given the Cartesian coordinates (da, db) of a point p, these function return the parameters da, db, c, and s + associated with the Givens rotation that zeros the y-coordinate of the point. + + Provides the x-coordinate of the point p. On exit contains the parameter r associated with the Givens rotation + Provides the y-coordinate of the point p. On exit contains the parameter z associated with the Givens rotation + Contains the parameter c associated with the Givens rotation + Contains the parameter s associated with the Givens rotation + This is equivalent to the DROTG LAPACK routine. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. 
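As a rough illustration of the DROTG-style Givens rotation described above, the sketch below computes c and s that zero the second coordinate of a point. It is the plain textbook formula, not the provider's (or LAPACK's) implementation, which additionally rescales to avoid overflow and returns the reconstruction parameter z; the function name `givens` is just a placeholder.

```python
import numpy as np

def givens(da, db):
    """Return (r, c, s) such that [[c, s], [-s, c]] @ [da, db] == [r, 0].

    Textbook version only; BLAS DROTG also guards against overflow/underflow
    and reports the extra parameter z used to reconstruct the rotation.
    """
    if db == 0.0:
        return da, 1.0, 0.0
    r = np.hypot(da, db)
    return r, da / r, db / r

r, c, s = givens(3.0, 4.0)
g = np.array([[c, s], [-s, c]])
print(g @ np.array([3.0, 4.0]))   # -> [5., 0.]: the y-coordinate is zeroed
```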
+ + + + Solves A*X=B for X using a previously SVD decomposed matrix. + + The number of rows in the A matrix. + The number of columns in the A matrix. + The s values returned by . + The left singular vectors returned by . + The right singular vectors returned by . + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Symmetric Householder reduction to tridiagonal form. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tred2 by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + Symmetric tridiagonal QL algorithm. + + Data array of matrix V (eigenvectors) + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedures tql2, by + Bowdler, Martin, Reinsch, and Wilkinson, Handbook for + Auto. Comp., Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Nonsymmetric reduction to Hessenberg form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Order of initial matrix + This is derived from the Algol procedures orthes and ortran, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutines in EISPACK. + + + + Nonsymmetric reduction from Hessenberg to real Schur form. + + Data array of matrix V (eigenvectors) + Array for internal storage of nonsymmetric Hessenberg form. + Arrays for internal storage of real parts of eigenvalues + Arrays for internal storage of imaginary parts of eigenvalues + Order of initial matrix + This is derived from the Algol procedure hqr2, + by Martin and Wilkinson, Handbook for Auto. Comp., + Vol.ii-Linear Algebra, and the corresponding + Fortran subroutine in EISPACK. + + + + + Complex scalar division X/Y. + + Real part of X + Imaginary part of X + Real part of Y + Imaginary part of Y + Division result as a number. + + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + Intel's Math Kernel Library (MKL) linear algebra provider. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. 
+ + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. 
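The AXPY/SCAL/DOT and GEMM entries above are thin wrappers around standard BLAS semantics. The following NumPy sketch spells out what those calls compute; the arrays and scalars are invented for the example, and the provider itself works on flat arrays rather than NumPy objects.

```python
import numpy as np

alpha, beta = 0.5, 2.0
x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])

axpy = y + alpha * x          # AXPY: result = y + alpha*x
scaled = alpha * x            # SCAL: scale a vector (or matrix)
dot = np.dot(x, y)            # DOT:  inner product of x and y

a = np.arange(6.0).reshape(2, 3)
b_mat = np.arange(12.0).reshape(3, 4)
c = np.ones((2, 4))

# Simplified multiply (alpha = 1, beta = 0, no transposes): result = a @ b
result = a @ b_mat
# Full GEMM-style update, here with op = identity:
# c = alpha*op(a)*op(b) + beta*c
c = alpha * (a @ b_mat) + beta * c
```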
+ + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. 
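The QR solve routines above require rows >= columns, i.e. they solve an overdetermined A*X=B in the least-squares sense. A minimal NumPy/SciPy sketch of that thin-QR path (factor, then back-substitute R*X = Q^T*B) is shown below; it is not the provider's code, and the random test matrices are only illustrative.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Overdetermined system (rows >= columns), as the QR solve routines require.
rng = np.random.default_rng(1)
a = rng.standard_normal((8, 3))    # M by N with M > N
b = rng.standard_normal((8, 2))

# Thin QR: q is M by N, r is N by N upper triangular.
q, r = np.linalg.qr(a, mode='reduced')

# Least-squares solution of A*X = B: solve R*X = Q^T * B by back substitution.
x = solve_triangular(r, q.T @ b)

# Same answer as the general-purpose least-squares driver.
assert np.allclose(x, np.linalg.lstsq(a, b, rcond=None)[0])
```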
+ + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . 
+ This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. 
+ + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + Hint path where to look for the native binaries + + Sets the desired bit consistency on repeated identical computations on varying CPU architectures, + as a trade-off with performance. + + VML optimal precision and rounding. + VML accuracy mode. + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If calling this method fails, consider to fall back to alternatives like the managed provider. + + + + + Frees memory buffers, caches and handles allocated in or to the provider. 
+ Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. 
+ The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. 
+ There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. 
The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. 
+ + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Does a point wise add of two arrays z = x + y. This can be used + to add vectors or matrices. + + The array x. + The array y. + The result of the addition. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise subtraction of two arrays z = x - y. This can be used + to subtract vectors or matrices. + + The array x. + The array y. + The result of the subtraction. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise multiplication of two arrays z = x * y. This can be used + to multiple elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise multiplication. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise division of two arrays z = x / y. This can be used + to divide elements of vectors or matrices. + + The array x. + The array y. + The result of the point wise division. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Does a point wise power of two arrays z = x ^ y. This can be used + to raise elements of vectors or matrices to the powers of another vector or matrix. + + The array x. + The array y. + The result of the point wise power. + There is no equivalent BLAS routine, but many libraries + provide optimized (parallel and/or vectorized) versions of this + routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Error codes return from the MKL provider. + + + + + Unable to allocate memory. + + + + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. 
+ + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. + + + OpenBLAS linear algebra provider. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex.One and beta set to Complex.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. 
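The factor-once, solve-many flow described by the LU and Cholesky entries above maps onto LAPACK's GETRF/GETRS and POTRF/POTRS pairs. As a rough illustration only (using SciPy rather than the provider documented here, with made-up sizes and seeds), the same pattern looks like this:

```python
# Illustrative sketch, not the provider's own API: factor A once,
# then reuse the factor for every right-hand side.
# POTRF/POTRS correspond to cho_factor/cho_solve, GETRF/GETRS to lu_factor/lu_solve.
import numpy as np
from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

rng = np.random.default_rng(42)
n = 5
m = rng.standard_normal((n, n))
a_spd = m @ m.T + n * np.eye(n)      # symmetric positive definite A
b = rng.standard_normal((n, 3))      # B with 3 right-hand-side columns

# Cholesky: factor A once (POTRF), then solve with the stored factor (POTRS).
c, lower = cho_factor(a_spd)
x_chol = cho_solve((c, lower), b)

# General LU with partial pivoting: GETRF followed by GETRS.
lu, piv = lu_factor(a_spd)
x_lu = lu_solve((lu, piv), b)

assert np.allclose(a_spd @ x_chol, b)
assert np.allclose(x_chol, x_lu)
```

Reusing the stored factor is what makes the separate "previously factored" solve entries above worthwhile whenever many right-hand sides share the same A.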
+ + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . 
+ On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to Complex32.One and beta set to Complex32.Zero, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always Complex32.One + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. 
+ + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. 
The length of the array must be order * order. + + + Hint path where to look for the native binaries + + + + Try to find out whether the provider is available, at least in principle. + Verification may still fail if available, but it will certainly fail if unavailable. + + + + + Initialize and verify that the provided is indeed available. + If not, fall back to alternatives like the managed provider + + + + + Frees memory buffers, caches and handles allocated in or to the provider. + Does not unload the provider itself, it is still usable afterwards. + + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0 and beta set to 0.0, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0 + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . 
+ The pivot indices of . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. 
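The SVD-based solver described above amounts to a least-squares solve through the pseudoinverse built from U, the singular values and V^T. A minimal NumPy sketch of that idea (illustrative only; the array shapes and the rank tolerance are my own choices, not the provider's):

```python
# Sketch of solving A*X = B via the singular value decomposition.
# GESVD yields U, s and V^T; the solution is X = V * diag(1/s) * U^T * B,
# with tiny singular values treated as zero to guard against rank deficiency.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 4))          # M by N with M >= N
b = rng.standard_normal((6, 2))          # two right-hand sides

u, s, vt = np.linalg.svd(a, full_matrices=False)
tol = max(a.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)
x = vt.T @ (s_inv[:, None] * (u.T @ b))

# Cross-check against NumPy's LAPACK-backed least-squares driver.
x_ref, *_ = np.linalg.lstsq(a, b, rcond=None)
assert np.allclose(x, x_ref)
```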
+ + + + Computes the eigenvalues and eigenvectors of a matrix. + + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Computes the requested of the matrix. + + The type of norm to compute. + The number of rows in the matrix. + The number of columns in the matrix. + The matrix to compute the norm from. + + The requested of the matrix. + + + + + Computes the dot product of x and y. + + The vector x. + The vector y. + The dot product of x and y. + This is equivalent to the DOT BLAS routine. + + + + Adds a scaled vector to another: result = y + alpha*x. + + The vector to update. + The value to scale by. + The vector to add to . + The result of the addition. + This is similar to the AXPY BLAS routine. + + + + Scales an array. Can be used to scale a vector and a matrix. + + The scalar. + The values to scale. + This result of the scaling. + This is similar to the SCAL BLAS routine. + + + + Multiples two matrices. result = x * y + + The x matrix. + The number of rows in the x matrix. + The number of columns in the x matrix. + The y matrix. + The number of rows in the y matrix. + The number of columns in the y matrix. + Where to store the result of the multiplication. + This is a simplified version of the BLAS GEMM routine with alpha + set to 1.0f and beta set to 0.0f, and x and y are not transposed. + + + + Multiplies two matrices and updates another with the result. c = alpha*op(a)*op(b) + beta*c + + How to transpose the matrix. + How to transpose the matrix. + The value to scale matrix. + The a matrix. + The number of rows in the matrix. + The number of columns in the matrix. + The b matrix + The number of rows in the matrix. + The number of columns in the matrix. + The value to scale the matrix. + The c matrix. + + + + Computes the LUP factorization of A. P*A = L*U. + + An by matrix. The matrix is overwritten with the + the LU factorization on exit. The lower triangular factor L is stored in under the diagonal of (the diagonal is always 1.0f + for the L factor). The upper triangular factor U is stored on and above the diagonal of . + The order of the square matrix . + On exit, it contains the pivot indices. The size of the array must be . + This is equivalent to the GETRF LAPACK routine. + + + + Computes the inverse of matrix using LU factorization. + + The N by N matrix to invert. Contains the inverse On exit. + The order of the square matrix . + This is equivalent to the GETRF and GETRI LAPACK routines. + + + + Computes the inverse of a previously factored matrix. + + The LU factored N by N matrix. Contains the inverse On exit. + The order of the square matrix . + The pivot indices of . + This is equivalent to the GETRI LAPACK routine. + + + + Solves A*X=B for X using LU factorization. + + The number of columns of B. + The square matrix A. + The order of the square matrix . + On entry the B matrix; on exit the X matrix. + This is equivalent to the GETRF and GETRS LAPACK routines. + + + + Solves A*X=B for X using a previously factored A matrix. + + The number of columns of B. + The factored A matrix. + The order of the square matrix . + The pivot indices of . + On entry the B matrix; on exit the X matrix. 
+ This is equivalent to the GETRS LAPACK routine. + + + + Computes the Cholesky factorization of A. + + On entry, a square, positive definite matrix. On exit, the matrix is overwritten with the + the Cholesky factorization. + The number of rows or columns in the matrix. + This is equivalent to the POTRF LAPACK routine. + + + + Solves A*X=B for X using Cholesky factorization. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRF add POTRS LAPACK routines. + + + + + Solves A*X=B for X using a previously factored A matrix. + + The square, positive definite matrix A. + The number of rows and columns in A. + On entry the B matrix; on exit the X matrix. + The number of columns in the B matrix. + This is equivalent to the POTRS LAPACK routine. + + + + Computes the QR factorization of A. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the R matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A M by M matrix that holds the Q matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Computes the thin QR factorization of A where M > N. + + On entry, it is the M by N A matrix to factor. On exit, + it is overwritten with the Q matrix of the QR factorization. + The number of rows in the A matrix. + The number of columns in the A matrix. + On exit, A N by N matrix that holds the R matrix of the + QR factorization. + A min(m,n) vector. On exit, contains additional information + to be used by the QR solve routine. + This is similar to the GEQRF and ORGQR LAPACK routines. + + + + Solves A*X=B for X using QR factorization of A. + + The A matrix. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using a previously QR factored matrix. + + The Q matrix obtained by calling . + The R matrix obtained by calling . + The number of rows in the A matrix. + The number of columns in the A matrix. + Contains additional information on Q. Only used for the native solver + and can be null for the managed provider. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + The type of QR factorization to perform. + Rows must be greater or equal to columns. + + + + Solves A*X=B for X using the singular value decomposition of A. + + On entry, the M by N matrix to decompose. + The number of rows in the A matrix. + The number of columns in the A matrix. + The B matrix. + The number of columns of B. + On exit, the solution matrix. + + + + Computes the singular value decomposition of A. + + Compute the singular U and VT vectors or not. + On entry, the M by N matrix to decompose. On exit, A may be overwritten. + The number of rows in the A matrix. + The number of columns in the A matrix. + The singular values of A in ascending value. + If is true, on exit U contains the left + singular vectors. + If is true, on exit VT contains the transposed + right singular vectors. + This is equivalent to the GESVD LAPACK routine. + + + + Computes the eigenvalues and eigenvectors of a matrix. 
+ + Whether the matrix is symmetric or not. + The order of the matrix. + The matrix to decompose. The length of the array must be order * order. + On output, the matrix contains the eigen vectors. The length of the array must be order * order. + On output, the eigen values (λ) of matrix in ascending value. The length of the array must . + On output, the block diagonal eigenvalue matrix. The length of the array must be order * order. + + + + Error codes return from the native OpenBLAS provider. + + + + + Unable to allocate memory. + + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. + + Uses and uses the value of + to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + Uses the value of to set whether the instance is thread safe. + + + + Construct a new random number generator with random seed. + + Uses + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The to use. + if set to true , the class is thread safe. + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Multiplicative congruential generator using a modulus of 2^31-1 and a multiplier of 1132489760. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Multiplicative congruential generator using a modulus of 2^59 and a multiplier of 13^13. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. 
+ + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Random number generator using Mersenne Twister 19937 algorithm. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Mersenne twister constant. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + Uses the value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + A 32-bit combined multiple recursive generator with 2 components of order 3. + + Based off of P. L'Ecuyer, "Combined Multiple Recursive Random Number Generators," Operations Research, 44, 5 (1996), 816--822. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. 
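For readers unfamiliar with combined multiple recursive generators, the recurrence behind the class cited from L'Ecuyer (1996) can be sketched in a few lines. This uses the widely published MRG32k3a constants and a naive seeding scheme; the class documented here may initialize and advance its state differently, so treat it purely as illustration:

```python
# Minimal sketch of a combined multiple recursive generator with two
# order-3 components (MRG32k3a-style constants). Not the .NET class itself.
M1, M2 = 4294967087, 4294944443

def mrg32k3a(seed=12345):
    """Yield doubles in [0, 1) from the two combined recurrences."""
    s1 = [seed, seed, seed]          # state of component 1 (order 3)
    s2 = [seed, seed, seed]          # state of component 2 (order 3)
    while True:
        p1 = (1403580 * s1[1] - 810728 * s1[0]) % M1
        s1 = [s1[1], s1[2], p1]
        p2 = (527612 * s2[2] - 1370589 * s2[0]) % M2
        s2 = [s2[1], s2[2], p2]
        z = (p1 - p2) % M1
        yield (z if z > 0 else M1) / (M1 + 1)

gen = mrg32k3a()
print([next(gen) for _ in range(3)])
```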
+ + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Represents a Parallel Additive Lagged Fibonacci pseudo-random number generator. + + + The type bases upon the implementation in the + Boost Random Number Library. + It uses the modulus 232 and by default the "lags" 418 and 1279. Some popular pairs are presented on + Wikipedia - Lagged Fibonacci generator. + + + + + Default value for the ShortLag + + + + + Default value for the LongLag + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The ShortLag value + TheLongLag value + + + + Gets the short lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Gets the long lag of the Lagged Fibonacci pseudo-random number generator. + + + + + Stores an array of random numbers + + + + + Stores an index for the random number array element that will be accessed next. + + + + + Fills the array with new unsigned random numbers. + + + Generated random numbers are 32-bit unsigned integers greater than or equal to 0 + and less than or equal to . + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + This class implements extension methods for the System.Random class. 
The extension methods generate + pseudo-random distributed numbers for types other than double and int32. + + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an array of uniform random bytes. + + The random number generator. + The size of the array to fill. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers greater than or equal to zero and less than . + + The random number generator. + The array to fill with random values. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Fills an array with uniform random 32-bit signed integers within the specified range. + + The random number generator. + The array to fill with random values. + Lower bound, inclusive. + Upper bound, exclusive. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative random number less than . + + The random number generator. + + A 64-bit signed integer greater than or equal to 0, and less than ; that is, + the range of return values includes 0 but not . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int32 range. + + The random number generator. + + A 32-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random number of the full Int64 range. + + The random number generator. + + A 64-bit signed integer of the full range, including 0, negative numbers, + and . + + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a nonnegative decimal floating point random number less than 1.0. + + The random number generator. 
+ + A decimal floating point number greater than or equal to 0.0, and less than 1.0; that is, + the range of return values includes 0.0 but not 1.0. + + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Returns a random boolean. + + The random number generator. + + This extension is thread-safe if and only if called on an random number + generator provided by Math.NET Numerics or derived from the RandomSource class. + + + + + Provides a time-dependent seed value, matching the default behavior of System.Random. + WARNING: There is no randomness in this seed and quick repeated calls can cause + the same seed value. Do not use for cryptography! + + + + + Provides a seed based on time and unique GUIDs. + WARNING: There is only low randomness in this seed, but at least quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Provides a seed based on an internal random number generator (crypto if available), time and unique GUIDs. + WARNING: There is only medium randomness in this seed, but quick repeated + calls will result in different seed values. Do not use for cryptography! + + + + + Base class for random number generators. This class introduces a layer between + and the Math.Net Numerics random number generators to provide thread safety. + When used directly it use the System.Random as random number source. + + + + + Initializes a new instance of the class using + the value of to set whether + the instance is thread safe or not. + + + + + Initializes a new instance of the class. + + if set to true , the class is thread safe. + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Fills an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The array to fill with random values. + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + The size of the array to fill. + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than . + + + + + Returns a random number less then a specified maximum. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + A 32-bit signed integer less than . + is zero or negative. + + + + Returns a random number within a specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + A 32-bit signed integer greater than or equal to and less than ; that is, the range of return values includes but not . If equals , is returned. + + is greater than . + + + + Fills an array with random 32-bit signed integers greater than or equal to zero and less than . + + The array to fill with random values. + + + + Returns an array with random 32-bit signed integers greater than or equal to zero and less than . + + The size of the array to fill. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 1. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ 1. + + + + Fills an array with random numbers within a specified range. + + The array to fill with random values. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an array with random 32-bit signed integers within the specified range. + + The size of the array to fill. + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Returns an infinite sequence of random 32-bit signed integers greater than or equal to zero and less than . + + + + + Returns an infinite sequence of random numbers within a specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive > minExclusive. + + + + Fills the elements of a specified array of bytes with random numbers. + + An array of bytes to contain random numbers. + is null. + + + + Returns a random number between 0.0 and 1.0. + + A double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than 2147483647 (). + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + A random number generator based on the class in the .NET library. + + + + + Construct a new random number generator with a random seed. + + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Default instance, thread-safe. + + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Returns a random 32-bit signed integer within the specified range. + + The exclusive upper bound of the random number returned. Range: maxExclusive ≥ 2 (not verified, must be ensured by caller). + + + + Returns a random 32-bit signed integer within the specified range. + + The inclusive lower bound of the random number returned. + The exclusive upper bound of the random number returned. 
Range: maxExclusive ≥ minExclusive + 2 (not verified, must be ensured by caller). + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fill an array with uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an array of uniform random numbers greater than or equal to 0.0 and less than 1.0. + WARNING: potentially very short random sequence length, can generate repeated partial sequences. + + Parallelized on large length, but also supports being called in parallel from multiple threads + + + + Returns an infinite sequence of uniform random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 1982 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (1982), "Algorithm AS 183: + An efficient and portable pseudo-random number generator". Applied Statistics 31 (1982) 188-190 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Wichmann-Hill’s 2006 combined multiplicative congruential generator. + + See: Wichmann, B. A. & Hill, I. D. (2006), "Generating good pseudo-random numbers". + Computational Statistics & Data Analysis 51:3 (2006) 1614-1622 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. 
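The 1982 Wichmann-Hill generator referenced above (Algorithm AS 183) combines three small multiplicative congruential generators and sums their scaled outputs modulo 1. A minimal sketch with the published AS 183 constants, assuming simple hand-picked seeds rather than the class's own time/GUID seeding:

```python
# Sketch of Wichmann-Hill (1982), Algorithm AS 183. Illustration only;
# the .NET class may seed and store its state differently.
def wichmann_hill_1982(s1=1, s2=1, s3=1):
    """Yield doubles in [0, 1); the three seeds should lie in 1..30000."""
    while True:
        s1 = (171 * s1) % 30269
        s2 = (172 * s2) % 30307
        s3 = (170 * s3) % 30323
        yield (s1 / 30269.0 + s2 / 30307.0 + s3 / 30323.0) % 1.0

gen = wichmann_hill_1982(123, 456, 789)
print([round(next(gen), 6) for _ in range(3)])
```

The 2006 variant documented next follows the same combining idea with larger moduli to extend the period.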
+ + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + + + + Initializes a new instance of the class. + + The seed value. + The seed is set to 1, if the zero is used as the seed. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Implements a multiply-with-carry Xorshift pseudo random number generator (RNG) specified in Marsaglia, George. (2003). Xorshift RNGs. + Xn = a * Xn−3 + c mod 2^32 + http://www.jstatsoft.org/v08/i14/paper + + + + + The default value for X1. + + + + + The default value for X2. + + + + + The default value for the multiplier. + + + + + The default value for the carry over. + + + + + The multiplier to compute a double-precision floating point number [0, 1) + + + + + Seed or last but three unsigned random number. + + + + + Last but two unsigned random number. + + + + + Last but one unsigned random number. + + + + + The value of the carry over. + + + + + The multiplier. + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Note: must be less than . + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class using + a seed based on time and unique GUIDs. + + if set to true , the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + If the seed value is zero, it is set to one. Uses the + value of to + set whether the instance is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. 
+ + Uses the default values of: + + a = 916905990 + c = 13579 + X1 = 77465321 + X2 = 362436069 + + + + + Initializes a new instance of the class. + + The seed value. + if set to true, the class is thread safe. + The multiply value + The initial carry value. + The initial value if X1. + The initial value if X2. + must be less than . + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Xoshiro256** pseudo random number generator. + A random number generator based on the class in the .NET library. + + + This is xoshiro256** 1.0, our all-purpose, rock-solid generator. It has + excellent(sub-ns) speed, a state space(256 bits) that is large enough + for any parallel application, and it passes all tests we are aware of. + + For generating just floating-point numbers, xoshiro256+ is even faster. + + The state must be seeded so that it is not everywhere zero.If you have + a 64-bit seed, we suggest to seed a splitmix64 generator and use its + output to fill s. + + For further details see: + David Blackman & Sebastiano Vigna (2018), "Scrambled Linear Pseudorandom Number Generators". + https://arxiv.org/abs/1805.01407 + + + + + Construct a new random number generator with a random seed. + + + + + Construct a new random number generator with random seed. + + if set to true , the class is thread safe. + + + + Construct a new random number generator with random seed. + + The seed value. + + + + Construct a new random number generator with random seed. + + The seed value. + if set to true , the class is thread safe. + + + + Returns a random double-precision floating point number greater than or equal to 0.0, and less than 1.0. + + + + + Returns a random 32-bit signed integer greater than or equal to zero and less than + + + + + Fills the elements of a specified array of bytes with random numbers in full range, including zero and 255 (). + + + + + Returns a random N-bit signed integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 32 (not verified). + + + + + Returns a random N-bit signed long integer greater than or equal to zero and less than 2^N. + N (bit count) is expected to be greater than zero and less than 64 (not verified). + + + + + Fills an array with random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an array of random numbers greater than or equal to 0.0 and less than 1.0. + + Supports being called in parallel from multiple threads. + + + + Returns an infinite sequence of random numbers greater than or equal to 0.0 and less than 1.0. 
+ + Supports being called in parallel from multiple threads, but the result must be enumerated from a single thread each. + + + + Splitmix64 RNG. + + RNG state. This can take any value, including zero. + A new random UInt64. + + Splitmix64 produces equidistributed outputs, thus if a zero is generated then the + next zero will be after a further 2^64 outputs. + + + + + Bisection root-finding algorithm. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy for both the root and the function value at the root. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Algorithm by Brent, Van Wijngaarden, Dekker et al. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Guess for the low value of the range where the root is supposed to be. Will be expanded if needed. + Guess for the high value of the range where the root is supposed to be. Will be expanded if needed. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Factor at which to expand the bounds, if needed. Default 1.6. + Maximum number of expand iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The low value of the range where the root is supposed to be. 
+ The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Helper method useful for preventing rounding errors. + a*sign(b) + + + + Algorithm by Broyden. + Implementation inspired by Press, Teukolsky, Vetterling, and Flannery, "Numerical Recipes in C", 2nd edition, Cambridge University Press + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Relative step size for calculating the Jacobian matrix at first step. Default 1.0e-4 + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + Relative step size for calculating the Jacobian matrix at first step. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + Find a solution of the equation f(x)=0. + The function to find roots from. + Initial guess of the root. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Must be greater than 0. + Maximum number of iterations. Usually 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Helper method to calculate an approximation of the Jacobian. + + The function. + The argument (initial guess). + The result (of initial guess). + Relative step size for calculating the Jacobian. + + + + Finds roots to the cubic equation x^3 + a2*x^2 + a1*x + a0 = 0 + Implements the cubic formula in http://mathworld.wolfram.com/CubicFormula.html + + + + + Q and R are transformed variables. + + + + + n^(1/3) - work around a negative double raised to (1/3) + + + + + Find all real-valued roots of the cubic equation a0 + a1*x + a2*x^2 + x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Find all three complex roots of the cubic equation d + c*x + b*x^2 + a*x^3 = 0. + Note the special coefficient order ascending by exponent (consistent with polynomials). + + + + + Pure Newton-Raphson root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. 
Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + Initial guess of the root. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Robust Newton-Raphson root-finding algorithm that falls back to bisection when overshooting or converging too slow, or to subdivision on lacking bracketing. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Default 20. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first derivative of the function to find roots from. + The low value of the range where the root is supposed to be. + The high value of the range where the root is supposed to be. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + How many parts an interval should be split into for zero crossing scanning in case of lacking bracketing. Example: 20. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false. + + + + Pure Secant root-finding algorithm without any recovery measures in cases it behaves badly. + The algorithm aborts immediately if the root leaves the bound interval. + + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MinValue. + The high value of the range where the root is supposed to be. Aborts if it leaves the interval. Default MaxValue. 
+ Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Default 1e-8. Must be greater than 0. + Maximum number of iterations. Default 100. + Returns the root with the specified accuracy. + + + + Find a solution of the equation f(x)=0. + The function to find roots from. + The first guess of the root within the bounds specified. + The second guess of the root within the bounds specified. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + The low value of the range where the root is supposed to be. Aborts if it leaves the interval. + Desired accuracy. The root will be refined until the accuracy or the maximum number of iterations is reached. Example: 1e-14. Must be greater than 0. + Maximum number of iterations. Example: 100. + The root that was found, if any. Undefined if the function returns false. + True if a root with the specified accuracy was found, else false + + + Detect a range containing at least one root. + The function to detect roots from. + Lower value of the range. + Upper value of the range + The growing factor of research. Usually 1.6. + Maximum number of iterations. Usually 50. + True if the bracketing operation succeeded, false otherwise. + This iterative methods stops when two values with opposite signs are found. + + + + Sorting algorithms for single, tuple and triple lists. + + + + + Sort a list of keys, in place using the quick sort algorithm using the quick sort algorithm. + + The type of elements in the key list. + List to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + Comparison, defining the sort order. + + + + Sort a range of a list of keys, in place using the quick sort algorithm. + + The type of element in the list. + List to sort. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the item list. + List to sort. + List to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. + + + + Sort a list of keys, items1 and items2 with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the key list. + The type of elements in the first item list. + The type of elements in the second item list. + List to sort. + First list to permute the same way as the key list. + Second list to permute the same way as the key list. + The zero-based starting index of the range to sort. + The length of the range to sort. + Comparison, defining the sort order. 
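The Sort overloads described above reorder one or two item lists in lock-step with the sorted key list. A hedged Python sketch of that behaviour (sorting by an explicit comparison and permuting parallel lists the same way; the library does this in place with quick sort, whereas the sketch simply rebuilds new lists):

```python
from functools import cmp_to_key

# Sketch only: sort `keys` and permute `items1`/`items2` the same way,
# driven by an explicit comparison as in the Sort overloads documented above.
def sort_with_items(keys, items1, items2, comparison):
    order = sorted(range(len(keys)),
                   key=cmp_to_key(lambda i, j: comparison(keys[i], keys[j])))
    return ([keys[i] for i in order],
            [items1[i] for i in order],
            [items2[i] for i in order])

keys = [3, 1, 2]
names = ["c", "a", "b"]
weights = [0.3, 0.1, 0.2]
print(sort_with_items(keys, names, weights, lambda a, b: (a > b) - (a < b)))
# -> ([1, 2, 3], ['a', 'b', 'c'], [0.1, 0.2, 0.3])
```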
+ + + + Sort a list of keys and items with respect to the keys, in place using the quick sort algorithm. + + The type of elements in the primary list. + The type of elements in the secondary list. + List to sort. + List to sort on duplicate primary items, and permute the same way as the key list. + Comparison, defining the primary sort order. + Comparison, defining the secondary sort order. + + + + Recursive implementation for an in place quick sort on a list. + + The type of the list on which the quick sort is performed. + The list which is sorted using quick sort. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on a list while reordering one other list accordingly. + + The type of the list on which the quick sort is performed. + The type of the list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on one list while reordering two other lists accordingly. + + The type of the list on which the quick sort is performed. + The type of the first list which is automatically reordered accordingly. + The type of the second list which is automatically reordered accordingly. + The list which is sorted using quick sort. + The first list which is automatically reordered accordingly. + The second list which is automatically reordered accordingly. + The method with which to compare two elements of the quick sort. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Recursive implementation for an in place quick sort on the primary and then by the secondary list while reordering one secondary list accordingly. + + The type of the primary list. + The type of the secondary list. + The list which is sorted using quick sort. + The list which is sorted secondarily (on primary duplicates) and automatically reordered accordingly. + The method with which to compare two elements of the primary list. + The method with which to compare two elements of the secondary list. + The left boundary of the quick sort. + The right boundary of the quick sort. + + + + Performs an in place swap of two elements in a list. + + The type of elements stored in the list. + The list in which the elements are stored. + The index of the first element of the swap. + The index of the second element of the swap. + + + + This partial implementation of the SpecialFunctions class contains all methods related to the Airy functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Bessel functions. + + + This partial implementation of the SpecialFunctions class contains all methods related to the error function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the Hankel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the harmonic function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the logistic function. 
+ + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the modified Bessel function. + + + This partial implementation of the SpecialFunctions class contains all methods related to the spherical Bessel functions. + + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the Airy function Ai. + AiryAi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Ai. + + + + Returns the exponentially scaled Airy function Ai. + ScaledAiryAi(z) is given by Exp(zta) * AiryAi(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of Airy function Ai + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of Airy function Ai. + + + + Returns the derivative of the Airy function Ai. + AiryAiPrime(z) is defined as d/dz AiryAi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Ai. + + + + Returns the exponentially scaled derivative of the Airy function Ai. + ScaledAiryAiPrime(z) is given by Exp(zta) * AiryAiPrime(z), where zta = (2/3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Ai. + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. + ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi(z). + + + + Returns the Airy function Bi. + AiryBi(z) is a solution to the Airy equation, y'' - y * z = 0. + + The value to compute the Airy function of. + The Airy function Bi. + + + + Returns the exponentially scaled Airy function Bi. + ScaledAiryBi(z) is given by Exp(-Abs(zta.Real)) * AiryBi(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the Airy function of. + The exponentially scaled Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. 
+ The exponentially scaled derivative of the Airy function Bi. + + + + Returns the derivative of the Airy function Bi. + AiryBiPrime(z) is defined as d/dz AiryBi(z). + + The value to compute the derivative of the Airy function of. + The derivative of the Airy function Bi. + + + + Returns the exponentially scaled derivative of the Airy function Bi. + ScaledAiryBiPrime(z) is given by Exp(-Abs(zta.Real)) * AiryBiPrime(z) where zta = (2 / 3) * z * Sqrt(z). + + The value to compute the derivative of the Airy function of. + The exponentially scaled derivative of the Airy function Bi. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the first kind. + BesselJ(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the first kind. + + + + Returns the exponentially scaled Bessel function of the first kind. + ScaledBesselJ(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselJ(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the first kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. + + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * Y(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the Bessel function of the second kind. + BesselY(n, z) is a solution to the Bessel differential equation. + + The order of the Bessel function. + The value to compute the Bessel function of. + The Bessel function of the second kind. + + + + Returns the exponentially scaled Bessel function of the second kind. + ScaledBesselY(n, z) is given by Exp(-Abs(z.Imaginary)) * BesselY(n, z). + + The order of the Bessel function. + The value to compute the Bessel function of. + The exponentially scaled Bessel function of the second kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the first kind. + BesselI(n, z) is a solution to the modified Bessel differential equation. 
+ + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the first kind. + + + + Returns the exponentially scaled modified Bessel function of the first kind. + ScaledBesselI(n, z) is given by Exp(-Abs(z.Real)) * BesselI(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the first kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Returns the modified Bessel function of the second kind. + BesselK(n, z) is a solution to the modified Bessel differential equation. + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The modified Bessel function of the second kind. + + + + Returns the exponentially scaled modified Bessel function of the second kind. + ScaledBesselK(n, z) is given by Exp(z) * BesselK(n, z). + + The order of the modified Bessel function. + The value to compute the modified Bessel function of. + The exponentially scaled modified Bessel function of the second kind. + + + + Computes the logarithm of the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The logarithm of the Euler Beta function evaluated at z,w. + If or are not positive. + + + + Computes the Euler Beta function. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The Euler Beta function evaluated at z,w. + If or are not positive. + + + + Returns the lower incomplete (unregularized) beta function + B(a,b,x) = int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. + The lower incomplete (unregularized) beta function. + + + + Returns the regularized lower incomplete beta function + I_x(a,b) = 1/Beta(a,b) * int(t^(a-1)*(1-t)^(b-1),t=0..x) for real a > 0, b > 0, 1 >= x >= 0. + + The first Beta parameter, a positive real number. + The second Beta parameter, a positive real number. + The upper limit of the integral. + The regularized lower incomplete beta function. + + + + ************************************** + COEFFICIENTS FOR METHOD ErfImp * + ************************************** + + Polynomial coefficients for a numerator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a denominator of ErfImp + calculation for Erf(x) in the interval [1e-10, 0.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.5, 0.75]. 
+ + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [0.75, 1.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [1.25, 2.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [2.25, 3.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [3.5, 5.25]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [5.25, 8]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [8, 11.5]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [11.5, 17]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [17, 24]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [24, 38]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [38, 60]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [60, 85]. + + + + Polynomial coefficients for a numerator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + Polynomial coefficients for a denominator in ErfImp + calculation for Erfc(x) in the interval [85, 110]. + + + + + ************************************** + COEFFICIENTS FOR METHOD ErfInvImp * + ************************************** + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0, 0.5]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.5, 0.75]. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. 
+ + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x less than 3. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 3 and 6. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 6 and 18. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x between 18 and 44. + + + + Polynomial coefficients for a numerator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Polynomial coefficients for a denominator of ErfInvImp + calculation for Erf^-1(z) in the interval [0.75, 1] with x greater than 44. + + + + Calculates the error function. + The value to evaluate. + the error function evaluated at given value. + + + returns 1 if x == double.PositiveInfinity. + returns -1 if x == double.NegativeInfinity. + + + + + Calculates the complementary error function. + The value to evaluate. + the complementary error function evaluated at given value. + + + returns 0 if x == double.PositiveInfinity. + returns 2 if x == double.NegativeInfinity. + + + + + Calculates the inverse error function evaluated at z. + The inverse error function evaluated at given value. + + + returns double.PositiveInfinity if z >= 1.0. + returns double.NegativeInfinity if z <= -1.0. + + + Calculates the inverse error function evaluated at z. + value to evaluate. + the inverse error function evaluated at Z. + + + + Implementation of the error function. + + Where to evaluate the error function. + Whether to compute 1 - the error function. + the error function. + + + Calculates the complementary inverse error function evaluated at z. + The complementary inverse error function evaluated at given value. + We have tested this implementation against the arbitrary precision mpmath library + and found cases where we can only guarantee 9 significant figures correct. + + returns double.PositiveInfinity if z <= 0.0. + returns double.NegativeInfinity if z >= 2.0. + + + calculates the complementary inverse error function evaluated at z. + value to evaluate. + the complementary inverse error function evaluated at Z. + + + + The implementation of the inverse error function. + + First intermediate parameter. + Second intermediate parameter. + Third intermediate parameter. + the inverse error function. + + + + Computes the generalized Exponential Integral function (En). + + The argument of the Exponential Integral function. + Integer power of the denominator term. Generalization index. + The value of the Exponential Integral function. + + This implementation of the computation of the Exponential Integral function follows the derivation in + "Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55", Abramowitz, M., and Stegun, I.A. 1964, reprinted 1968 by + Dover Publications, New York), Chapters 6, 7, and 26. 
+ AND + "Advanced mathematical methods for scientists and engineers", Bender, Carl M.; Steven A. Orszag (1978). page 253 + + + for x > 1 uses continued fraction approach that is often used to compute incomplete gamma. + for 0 < x <= 1 uses Taylor series expansion + + Our unit tests suggest that the accuracy of the Exponential Integral function is correct up to 13 floating point digits. + + + + + Computes the factorial function x -> x! of an integer number > 0. The function can represent all number up + to 22! exactly, all numbers up to 170! using a double representation. All larger values will overflow. + + A value value! for value > 0 + + If you need to multiply or divide various such factorials, consider using the logarithmic version + instead so you can add instead of multiply and subtract instead of divide, and + then exponentiate the result using . This will also circumvent the problem that + factorials become very large even for small parameters. + + + + + + Computes the factorial of an integer. + + + + + Computes the logarithmic factorial function x -> ln(x!) of an integer number > 0. + + A value value! for value > 0 + + + + Computes the binomial coefficient: n choose k. + + A nonnegative value n. + A nonnegative value h. + The binomial coefficient: n choose k. + + + + Computes the natural logarithm of the binomial coefficient: ln(n choose k). + + A nonnegative value n. + A nonnegative value h. + The logarithmic binomial coefficient: ln(n choose k). + + + + Computes the multinomial coefficient: n choose n1, n2, n3, ... + + A nonnegative value n. + An array of nonnegative values that sum to . + The multinomial coefficient. + if is . + If or any of the are negative. + If the sum of all is not equal to . + + + + The order of the approximation. + + + + + Auxiliary variable when evaluating the function. + + + + + Polynomial coefficients for the approximation. + + + + + Computes the logarithm of the Gamma function. + + The argument of the gamma function. + The logarithm of the gamma function. + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which achieves an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + Our unit tests suggest that the accuracy of the Gamma function is correct up to 14 floating point digits. + + + + + Computes the Gamma function. + + The argument of the gamma function. + The logarithm of the gamma function. + + + This implementation of the computation of the gamma and logarithm of the gamma function follows the derivation in + "An Analysis Of The Lanczos Gamma Approximation", Glendon Ralph Pugh, 2004. + We use the implementation listed on p. 116 which should achieve an accuracy of 16 floating point digits. Although 16 digit accuracy + should be sufficient for double values, improving accuracy is possible (see p. 126 in Pugh). + + Our unit tests suggest that the accuracy of the Gamma function is correct up to 13 floating point digits. + + + + + Returns the upper incomplete regularized gamma function + Q(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete regularized gamma function. 
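The incomplete gamma entries above all share the kernel exp(-t)*t^(a-1); the regularized lower function P(a,x) integrates it from 0 to x and the regularized upper function is its complement, Q(a,x) = 1 - P(a,x). A hedged sketch of the standard power-series evaluation of P (illustration only, not the library's implementation, and best suited to moderate x; production code typically switches to a continued fraction for large x):

```python
import math

# Sketch: regularized lower incomplete gamma P(a, x) via its power series
#   P(a, x) = x^a * e^(-x) / Gamma(a) * sum_{n>=0} x^n / (a*(a+1)*...*(a+n))
# Iterates the series until the terms fall below a relative tolerance.
def gamma_lower_regularized(a, x, tol=1e-14, max_terms=1000):
    if x <= 0.0:
        return 0.0
    term = 1.0 / a
    total = term
    for n in range(1, max_terms):
        term *= x / (a + n)
        total += term
        if abs(term) < abs(total) * tol:
            break
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

print(gamma_lower_regularized(3.0, 2.0))        # ~0.3233 (P(3, 2))
print(1.0 - gamma_lower_regularized(3.0, 2.0))  # complement Q(3, 2)
```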
+ + + + Returns the upper incomplete gamma function + Gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The lower integral limit. + The upper incomplete gamma function. + + + + Returns the lower incomplete gamma function + gamma(a,x) = int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the lower incomplete regularized gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0. + + The argument for the gamma function. + The upper integral limit. + The lower incomplete gamma function. + + + + Returns the inverse P^(-1) of the regularized lower incomplete gamma function + P(a,x) = 1/Gamma(a) * int(exp(-t)t^(a-1),t=0..x) for real a > 0, x > 0, + such that P^(-1)(a,P(a,x)) == x. + + + + + Computes the Digamma function which is mathematically defined as the derivative of the logarithm of the gamma function. + This implementation is based on + Jose Bernardo + Algorithm AS 103: + Psi ( Digamma ) Function, + Applied Statistics, + Volume 25, Number 3, 1976, pages 315-317. + Using the modifications as in Tom Minka's lightspeed toolbox. + + The argument of the digamma function. + The value of the DiGamma function at . + + + + Computes the inverse Digamma function: this is the inverse of the logarithm of the gamma function. This function will + only return solutions that are positive. + This implementation is based on the bisection method. + + The argument of the inverse digamma function. + The positive solution to the inverse DiGamma function at . + + + + Computes the Rising Factorial (Pochhammer function) x -> (x)n, n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Rising Factorial for x and n + + + + Computes the Falling Factorial (Pochhammer function) x -> x(n), n>= 0. see: https://en.wikipedia.org/wiki/Falling_and_rising_factorials + + The real value of the Falling Factorial for x and n + + + + A generalized hypergeometric series is a power series in which the ratio of successive coefficients indexed by n is a rational function of n. + This is the most common pFq(a1, ..., ap; b1,...,bq; z) representation + see: https://en.wikipedia.org/wiki/Generalized_hypergeometric_function + + The list of coefficients in the numerator + The list of coefficients in the denominator + The variable in the power series + The value of the Generalized HyperGeometric Function. + + + + Returns the Hankel function of the first kind. + HankelH1(n, z) is defined as BesselJ(n, z) + j * BesselY(n, z). + + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the first kind. + + + + Returns the exponentially scaled Hankel function of the first kind. + ScaledHankelH1(n, z) is given by Exp(-z * j) * HankelH1(n, z) where j = Sqrt(-1). + + The order of the Hankel function. + The value to compute the Hankel function of. + The exponentially scaled Hankel function of the first kind. + + + + Returns the Hankel function of the second kind. + HankelH2(n, z) is defined as BesselJ(n, z) - j * BesselY(n, z). + + The order of the Hankel function. + The value to compute the Hankel function of. + The Hankel function of the second kind. + + + + Returns the exponentially scaled Hankel function of the second kind. + ScaledHankelH2(n, z) is given by Exp(z * j) * HankelH2(n, z) where j = Sqrt(-1). + + The order of the Hankel function. 
+ The value to compute the Hankel function of. + The exponentially scaled Hankel function of the second kind. + + + + Computes the 'th Harmonic number. + + The Harmonic number which needs to be computed. + The t'th Harmonic number. + + + + Compute the generalized harmonic number of order n of m. (1 + 1/2^m + 1/3^m + ... + 1/n^m) + + The order parameter. + The power parameter. + General Harmonic number. + + + + Returns the Kelvin function of the first kind. + KelvinBe(nu, x) is given by BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(nu, x) and KelvinBei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function of the first kind. + + + + Returns the Kelvin function ber. + KelvinBer(nu, x) is given by the real part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function ber. + KelvinBer(x) is given by the real part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBer(x) is equivalent to KelvinBer(0, x). + + The value to compute the Kelvin function of. + The Kelvin function ber. + + + + Returns the Kelvin function bei. + KelvinBei(nu, x) is given by the imaginary part of BesselJ(nu, j * sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the Kelvin function bei. + KelvinBei(x) is given by the imaginary part of BesselJ(0, j * sqrt(j) * x) where j = sqrt(-1). + KelvinBei(x) is equivalent to KelvinBei(0, x). + + The value to compute the Kelvin function of. + The Kelvin function bei. + + + + Returns the derivative of the Kelvin function ber. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function ber + + + + Returns the derivative of the Kelvin function ber. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ber. + + + + Returns the derivative of the Kelvin function bei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + the derivative of the Kelvin function bei. + + + + Returns the derivative of the Kelvin function bei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function bei. + + + + Returns the Kelvin function of the second kind + KelvinKe(nu, x) is given by Exp(-nu * pi * j / 2) * BesselK(nu, x * sqrt(j)) where j = sqrt(-1). + KelvinKer(nu, x) and KelvinKei(nu, x) are the real and imaginary parts of the KelvinBe(nu, x) + + The order of the Kelvin function. + The value to calculate the kelvin function of, + + + + + Returns the Kelvin function ker. + KelvinKer(nu, x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function ker. + KelvinKer(x) is given by the real part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKer(x) is equivalent to KelvinKer(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function ker. + + + + Returns the Kelvin function kei. 
+ KelvinKei(nu, x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(nu, sqrt(j) * x) where j = sqrt(-1). + + the order of the the Kelvin function. + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the Kelvin function kei. + KelvinKei(x) is given by the imaginary part of Exp(-nu * pi * j / 2) * BesselK(0, sqrt(j) * x) where j = sqrt(-1). + KelvinKei(x) is equivalent to KelvinKei(0, x). + + The non-negative real value to compute the Kelvin function of. + The Kelvin function kei. + + + + Returns the derivative of the Kelvin function ker. + + The order of the Kelvin function. + The non-negative real value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function ker. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function ker. + + + + Returns the derivative of the Kelvin function kei. + + The order of the Kelvin function. + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Returns the derivative of the Kelvin function kei. + + The value to compute the derivative of the Kelvin function of. + The derivative of the Kelvin function kei. + + + + Computes the logistic function. see: http://en.wikipedia.org/wiki/Logistic + + The parameter for which to compute the logistic function. + The logistic function of . + + + + Computes the logit function, the inverse of the sigmoid logistic function. see: http://en.wikipedia.org/wiki/Logit + + The parameter for which to compute the logit function. This number should be + between 0 and 1. + The logarithm of divided by 1.0 - . + + + + ************************************** + COEFFICIENTS FOR METHODS bessi0 * + ************************************** + + Chebyshev coefficients for exp(-x) I0(x) + in the interval [0, 8]. + + lim(x->0){ exp(-x) I0(x) } = 1. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I0(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I0(x) } = 1/sqrt(2pi). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessi1 * + ************************************** + + Chebyshev coefficients for exp(-x) I1(x) / x + in the interval [0, 8]. + + lim(x->0){ exp(-x) I1(x) / x } = 1/2. + + + + Chebyshev coefficients for exp(-x) sqrt(x) I1(x) + in the inverted interval [8, infinity]. + + lim(x->inf){ exp(-x) sqrt(x) I1(x) } = 1/sqrt(2pi). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessk0, bessk0e * + ************************************** + + Chebyshev coefficients for K0(x) + log(x/2) I0(x) + in the interval [0, 2]. The odd order coefficients are all + zero; only the even order coefficients are listed. + + lim(x->0){ K0(x) + log(x/2) I0(x) } = -EUL. + + + + Chebyshev coefficients for exp(x) sqrt(x) K0(x) + in the inverted interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K0(x) } = sqrt(pi/2). + + + + + ************************************** + COEFFICIENTS FOR METHODS bessk1, bessk1e * + ************************************** + + Chebyshev coefficients for x(K1(x) - log(x/2) I1(x)) + in the interval [0, 2]. + + lim(x->0){ x(K1(x) - log(x/2) I1(x)) } = 1. + + + + Chebyshev coefficients for exp(x) sqrt(x) K1(x) + in the interval [2, infinity]. + + lim(x->inf){ exp(x) sqrt(x) K1(x) } = sqrt(pi/2). + + + + Returns the modified Bessel function of first kind, order 0 of the argument. +

+ The function is defined as i0(x) = j0( ix ). +

+ The range is partitioned into the two intervals [0, 8] and + (8, infinity). Chebyshev polynomial expansions are employed + in each interval. +

+ The value to compute the Bessel function of. + +
+ + Returns the modified Bessel function of first kind, + order 1 of the argument. +

+ The function is defined as i1(x) = -i j1( ix ). +

+ The range is partitioned into the two intervals [0, 8] and + (8, infinity). Chebyshev polynomial expansions are employed + in each interval. +

+ The value to compute the Bessel function of. + +
+ + Returns the modified Bessel function of the second kind + of order 0 of the argument. +

+ The range is partitioned into the two intervals [0, 8] and + (8, infinity). Chebyshev polynomial expansions are employed + in each interval. +

+ The value to compute the Bessel function of. + +
+ + Returns the exponentially scaled modified Bessel function + of the second kind of order 0 of the argument. + + The value to compute the Bessel function of. + + + + Returns the modified Bessel function of the second kind + of order 1 of the argument. +

+ The range is partitioned into the two intervals [0, 2] and + (2, infinity). Chebyshev polynomial expansions are employed + in each interval. +

+ The value to compute the Bessel function of. + +
+ + Returns the exponentially scaled modified Bessel function + of the second kind of order 1 of the argument. +

+ k1e(x) = exp(x) * k1(x). +

+ The value to compute the Bessel function of. + +
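The point of the exponentially scaled variants k0e and k1e is that the unscaled functions decay like sqrt(pi/(2x)) e^{-x} and so fall out of the double-precision range for large arguments (k1(x) underflows to zero somewhere around x ≈ 740), while the scaled values stay of order one. A sketch of the standard leading-order asymptotics for k1e, useful only as a sanity check and not taken from the library:

    // Leading-order large-x form of the scaled function k1e(x) = exp(x) * K1(x),
    // from K_nu(x) ~ sqrt(pi/(2x)) e^{-x} [1 + (4 nu^2 - 1)/(8 x) + ...] with nu = 1.
    public static double K1eAsymptotic(double x)
    {
        return Math.Sqrt(Math.PI / (2.0 * x)) * (1.0 + 3.0 / (8.0 * x));
    }

For x around 20 and above this already agrees with the true scaled value to a few parts in 10^4, since the first neglected term is of order 15/(128 x^2).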
+ + + Returns the modified Struve function of order 0. + + The value to compute the function of. + + + + Returns the modified Struve function of order 1. + + The value to compute the function of. + + + + Returns the difference between the Bessel I0 and Struve L0 functions. + + The value to compute the function of. + + + + Returns the difference between the Bessel I1 and Struve L1 functions. + + The value to compute the function of. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the first kind. + SphericalBesselJ(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselJ(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the first kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Returns the spherical Bessel function of the second kind. + SphericalBesselY(n, z) is given by Sqrt(pi/2) / Sqrt(z) * BesselY(n + 1/2, z). + + The order of the spherical Bessel function. + The value to compute the spherical Bessel function of. + The spherical Bessel function of the second kind. + + + + Numerically stable exponential minus one, i.e. x -> exp(x)-1 + + A number specifying a power. + Returns exp(power)-1. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Numerically stable hypotenuse of a right angle triangle, i.e. (a,b) -> sqrt(a^2 + b^2) + + The length of side a of the triangle. + The length of side b of the triangle. + Returns sqrt(a2 + b2) without underflow/overflow. + + + + Evaluation functions, useful for function approximation. + + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. + The coefficients of the polynomial, coefficient for power k at index k. + + + + Evaluate a polynomial at point x. + Coefficients are ordered by power with power k at index k. + Example: coefficients [3,-1,2] represent y=2x^2-x+3. + + The location where to evaluate the polynomial at. 
+ The coefficients of the polynomial, coefficient for power k at index k. + + + + Numerically stable series summation + + provides the summands sequentially + Sum + + + Evaluates the series of Chebyshev polynomials Ti at argument x/2. + The series is given by +
+            y = Σ'_{i=0}^{N-1} coef[i] T_i(x/2)
+ Coefficients are stored in reverse order, i.e. the zero + order term is last in the array. Note N is the number of + coefficients, not the order. +

+ If coefficients are for the interval a to b, x must + have been transformed to x -> 2(2x - b - a)/(b-a) before + entering the routine. This maps x from (a, b) to (-1, 1), + over which the Chebyshev polynomials are defined. +

+ If the coefficients are for the inverted interval, in + which (a, b) is mapped to (1/b, 1/a), the transformation + required is x -> 2(2ab/x - b - a)/(b-a). If b is infinity, + this becomes x -> 4a/x - 2.

+ SPEED: +

+ Taking advantage of the recurrence properties of the + Chebyshev polynomials, the routine requires one more + addition per loop than evaluating a nested polynomial of + the same degree. +

+ The coefficients of the polynomial. + Argument to the polynomial. + + Reference: https://bpm2.svn.codeplex.com/svn/Common.Numeric/Arithmetic.cs +

+ Marked as Deprecated in + http://people.apache.org/~isabel/mahout_site/mahout-matrix/apidocs/org/apache/mahout/jet/math/Arithmetic.html + + + +
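The description above matches the classic Cephes-style Clenshaw recurrence. The following C# sketch assumes, as the primed sum and the reverse coefficient order indicate, that coef[0] holds the highest-order coefficient and that the zero-order term (last in the array) effectively enters with weight 1/2; the class name is just for the illustration, this is not the library source:

    static class ChebyshevApprox
    {
        // Clenshaw recurrence for the primed sum  y = sum' coef[i] T_i(x/2).
        // x is already the doubled variable, so the recurrence uses x*b1 rather
        // than the usual 2*y*b1.
        public static double ChebyshevSeries(double x, double[] coef)
        {
            double b0 = coef[0], b1 = 0.0, b2 = 0.0;
            for (int i = 1; i < coef.Length; i++)
            {
                b2 = b1;
                b1 = b0;
                b0 = x * b1 - b2 + coef[i];
            }
            // The (b0 - b2)/2 combination yields the primed sum, i.e. the
            // zero-order coefficient contributes with weight 1/2.
            return 0.5 * (b0 - b2);
        }
    }

This is where the "one more addition per loop than a nested polynomial" remark comes from: each iteration costs one multiply and two additions/subtractions.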

+ Summation of Chebyshev polynomials, using the Clenshaw method with Reinsch modification. + + The no. of terms in the sequence. + The coefficients of the Chebyshev series, length n+1. + The value at which the series is to be evaluated. + + ORIGINAL AUTHOR: + Dr. Allan J. MacLeod; Dept. of Mathematics and Statistics, University of Paisley; High St., PAISLEY, SCOTLAND + REFERENCES: + "An error analysis of the modified Clenshaw method for evaluating Chebyshev and Fourier series" + J. Oliver, J.I.M.A., vol. 20, 1977, pp379-391 + +
+ + + Valley-shaped Rosenbrock function for 2 dimensions: (x,y) -> (1-x)^2 + 100*(y-x^2)^2. + This function has a global minimum at (1,1) with f(1,1) = 0. + Common range: [-5,10] or [-2.048,2.048]. + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Valley-shaped Rosenbrock function for 2 or more dimensions. + This function have a global minimum of all ones and, for 8 > N > 3, a local minimum at (-1,1,...,1). + + + https://en.wikipedia.org/wiki/Rosenbrock_function + http://www.sfu.ca/~ssurjano/rosen.html + + + + + Himmelblau, a multi-modal function: (x,y) -> (x^2+y-11)^2 + (x+y^2-7)^2 + This function has 4 global minima with f(x,y) = 0. + Common range: [-6,6]. + Named after David Mautner Himmelblau + + + https://en.wikipedia.org/wiki/Himmelblau%27s_function + + + + + Rastrigin, a highly multi-modal function with many local minima. + Global minimum of all zeros with f(0) = 0. + Common range: [-5.12,5.12]. + + + https://en.wikipedia.org/wiki/Rastrigin_function + http://www.sfu.ca/~ssurjano/rastr.html + + + + + Drop-Wave, a multi-modal and highly complex function with many local minima. + Global minimum of all zeros with f(0) = -1. + Common range: [-5.12,5.12]. + + + http://www.sfu.ca/~ssurjano/drop.html + + + + + Ackley, a function with many local minima. It is nearly flat in outer regions but has a large hole at the center. + Global minimum of all zeros with f(0) = 0. + Common range: [-32.768, 32.768]. + + + http://www.sfu.ca/~ssurjano/ackley.html + + + + + Bowl-shaped first Bohachevsky function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-100, 100] + + + http://www.sfu.ca/~ssurjano/boha.html + + + + + Plate-shaped Matyas function. + Global minimum of all zeros with f(0,0) = 0. + Common range: [-10, 10]. + + + http://www.sfu.ca/~ssurjano/matya.html + + + + + Valley-shaped six-hump camel back function. + Two global minima and four local minima. Global minima with f(x) ) -1.0316 at (0.0898,-0.7126) and (-0.0898,0.7126). + Common range: x in [-3,3], y in [-2,2]. + + + http://www.sfu.ca/~ssurjano/camel6.html + + + + + Statistics operating on arrays assumed to be unsorted. + WARNING: Methods with the Inplace-suffix may modify the data array by reordering its entries. + + + + + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. 
+ Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. 
+ + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the smallest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the largest absolute value from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the geometric mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the harmonic mean of the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as unsorted array. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as unsorted array. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample array, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample arrays. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample array. + Second sample array. + + + + Evaluates the population covariance from the full population provided as two arrays. 
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population array. + Second population array. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the unsorted data array. + Returns NaN if data is empty or any entry is NaN. + + Sample array, no sorting is assumed. + + + + Returns the order statistic (order 1..N) from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the unsorted data array. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the p-Percentile value from the unsorted data array. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the third quartile value from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the inter-quartile range from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the unsorted data array. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. 
+ Quantile selector, between 0.0 and 1.0 (inclusive) + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the unsorted data array. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + Sample array, no sorting is assumed. Will be reordered. + Quantile selector, between 0.0 and 1.0 (inclusive) + Quantile definition, to choose what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the unsorted data array. + The rank definition can be specified to be compatible + with an existing system. + WARNING: Works inplace and can thus causes the data array to be reordered. + + + + + A class with correlation measures between two datasets. + + + + + Auto-correlation function (ACF) based on FFT for all possible lags k. + + Data array to calculate auto correlation for. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function (ACF) based on FFT for lags between kMin and kMax. + + The data array to calculate auto correlation for. + Max lag to calculate ACF for must be positive and smaller than x.Length. + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length. + An array with the ACF as a function of the lags k. + + + + Auto-correlation function based on FFT for lags k. + + The data array to calculate auto correlation for. + Array with lags to calculate ACF for. + An array with the ACF as a function of the lags k. + + + + The internal method for calculating the auto-correlation. + + The data array to calculate auto-correlation for + Min lag to calculate ACF for (0 = no shift with acf=1) must be zero or positive and smaller than x.Length + Max lag (EXCLUSIVE) to calculate ACF for must be positive and smaller than x.Length + An array with the ACF as a function of the lags k. + + + + Computes the Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + The Pearson product-moment correlation coefficient. + + + + Computes the Weighted Pearson Product-Moment Correlation coefficient. + + Sample data A. + Sample data B. + Corresponding weights of data. + The Weighted Pearson product-moment correlation coefficient. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Array of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Pearson Product-Moment Correlation matrix. + + Enumerable of sample data vectors. + The Pearson product-moment correlation matrix. + + + + Computes the Spearman Ranked Correlation coefficient. + + Sample data series A. + Sample data series B. + The Spearman ranked correlation coefficient. + + + + Computes the Spearman Ranked Correlation matrix. + + Array of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the Spearman Ranked Correlation matrix. + + Enumerable of sample data vectors. + The Spearman ranked correlation matrix. + + + + Computes the basic statistics of data set. The class meets the + NIST standard of accuracy for mean, variance, and standard deviation + (the only statistics they provide exact values for) and exceeds them + in increased accuracy mode. + Recommendation: consider to use RunningStatistics instead. 
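Both this class and the RunningStatistics it recommends essentially maintain a count, a running mean and a running sum of squared deviations. A minimal one-pass Welford sketch with the N-1 versus N normalizers spelled out (an illustration under my own naming, not the library code):

    using System;

    // One-pass (Welford) accumulation of count, mean and M2 = sum of squared deviations.
    sealed class RunningMoments
    {
        long n;
        double mean, m2;

        public void Push(double x)
        {
            n++;
            double delta = x - mean;
            mean += delta / n;
            m2 += delta * (x - mean);
        }

        public double Mean => n > 0 ? mean : double.NaN;
        public double Variance => n > 1 ? m2 / (n - 1) : double.NaN;       // N-1 normalizer (Bessel's correction)
        public double PopulationVariance => n > 0 ? m2 / n : double.NaN;   // N normalizer
        public double StandardDeviation => Math.Sqrt(Variance);
    }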
+ + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Initializes a new instance of the class. + + The sample data. + + If set to true, increased accuracy mode used. + Increased accuracy mode uses types for internal calculations. + + + Don't use increased accuracy for data sets containing large values (in absolute value). + This may cause the calculations to overflow. + + + + + Gets the size of the sample. + + The size of the sample. + + + + Gets the sample mean. + + The sample mean. + + + + Gets the unbiased population variance estimator (on a dataset of size N will use an N-1 normalizer). + + The sample variance. + + + + Gets the unbiased population standard deviation (on a dataset of size N will use an N-1 normalizer). + + The sample standard deviation. + + + + Gets the sample skewness. + + The sample skewness. + Returns zero if is less than three. + + + + Gets the sample kurtosis. + + The sample kurtosis. + Returns zero if is less than four. + + + + Gets the maximum sample value. + + The maximum sample value. + + + + Gets the minimum sample value. + + The minimum sample value. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of data values. + + A sequence of datapoints. + + + + Computes descriptive statistics from a stream of nullable data values. + + A sequence of datapoints. + + + + Internal use. Method use for setting the statistics. + + For setting Mean. + For setting Variance. + For setting Skewness. + For setting Kurtosis. + For setting Minimum. + For setting Maximum. + For setting Count. + + + + A consists of a series of s, + each representing a region limited by a lower bound (exclusive) and an upper bound (inclusive). + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + This IComparer performs comparisons between a point and a bucket. + + + + + Compares a point and a bucket. The point will be encapsulated in a bucket with width 0. + + The first bucket to compare. + The second bucket to compare. + -1 when the point is less than this bucket, 0 when it is in this bucket and 1 otherwise. + + + + Lower Bound of the Bucket. + + + + + Upper Bound of the Bucket. + + + + + The number of datapoints in the bucket. + + + Value may be NaN if this was constructed as a argument. + + + + + Initializes a new instance of the Bucket class. + + + + + Constructs a Bucket that can be used as an argument for a + like when performing a Binary search. + + Value to look for + + + + Creates a copy of the Bucket with the lowerbound, upperbound and counts exactly equal. 
+ + A cloned Bucket object. + + + + Width of the Bucket. + + + + + True if this is a single point argument for + when performing a Binary search. + + + + + Default comparer. + + + + + This method check whether a point is contained within this bucket. + + The point to check. + + 0 if the point falls within the bucket boundaries; + -1 if the point is smaller than the bucket, + +1 if the point is larger than the bucket. + + + + Comparison of two disjoint buckets. The buckets cannot be overlapping. + + + 0 if UpperBound and LowerBound are bit-for-bit equal + 1 if This bucket is lower that the compared bucket + -1 otherwise + + + + + Checks whether two Buckets are equal. + + + UpperBound and LowerBound are compared bit-for-bit, but This method tolerates a + difference in Count given by . + + + + + Provides a hash code for this bucket. + + + + + Formats a human-readable string for this bucket. + + + + + A class which computes histograms of data. + + + + + Contains all the Buckets of the Histogram. + + + + + Indicates whether the elements of buckets are currently sorted. + + + + + Initializes a new instance of the Histogram class. + + + + + Constructs a Histogram with a specific number of equally sized buckets. The upper and lower bound of the histogram + will be set to the smallest and largest datapoint. + + The data sequence to build a histogram on. + The number of buckets to use. + + + + Constructs a Histogram with a specific number of equally sized buckets. + + The data sequence to build a histogram on. + The number of buckets to use. + The histogram lower bound. + The histogram upper bound. + + + + Add one data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The datapoint which we want to add. + + + + Add a sequence of data point to the histogram. If the datapoint falls outside the range of the histogram, + the lowerbound or upperbound will automatically adapt. + + The sequence of datapoints which we want to add. + + + + Adds a Bucket to the Histogram. + + + + + Sort the buckets if needed. + + + + + Returns the Bucket that contains the value v. + + The point to search the bucket for. + A copy of the bucket containing point . + + + + Returns the index in the Histogram of the Bucket + that contains the value v. + + The point to search the bucket index for. + The index of the bucket containing the point. + + + + Returns the lower bound of the histogram. + + + + + Returns the upper bound of the histogram. + + + + + Gets the n'th bucket. + + The index of the bucket to be returned. + A copy of the n'th bucket. + + + + Gets the number of buckets. + + + + + Gets the total number of datapoints in the histogram. + + + + + Prints the buckets contained in the . + + + + + Kernel density estimation (KDE). + + + + + Estimate the probability density function of a random variable. + + + The routine assumes that the provided kernel is well defined, i.e. a real non-negative function that integrates to 1. + + + + + Estimate the probability density function of a random variable with a Gaussian kernel. + + + + + Estimate the probability density function of a random variable with an Epanechnikov kernel. + The Epanechnikov kernel is optimal in a mean square error sense. + + + + + Estimate the probability density function of a random variable with a uniform kernel. + + + + + Estimate the probability density function of a random variable with a triangular kernel. 
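All of the kernel estimates described above share the same form fhat(x) = 1/(n*h) * sum_i K((x - x_i)/h), differing only in the kernel K and in how the bandwidth h is chosen. A minimal sketch with the Gaussian kernel and a caller-supplied bandwidth (illustrative names, not the library routine):

    using System;

    static class Kde
    {
        // fhat(x) = 1/(n*h) * sum_i K((x - x_i)/h), K = standard normal pdf.
        public static double GaussianEstimate(double x, double[] samples, double bandwidth)
        {
            double sum = 0.0;
            foreach (double xi in samples)
            {
                double u = (x - xi) / bandwidth;
                sum += Math.Exp(-0.5 * u * u) / Math.Sqrt(2.0 * Math.PI);
            }
            return sum / (samples.Length * bandwidth);
        }
    }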
+ + + + + A Gaussian kernel (PDF of Normal distribution with mean 0 and variance 1). + This kernel is the default. + + + + + Epanechnikov Kernel: + x => Math.Abs(x) <= 1.0 ? 3.0/4.0(1.0-x^2) : 0.0 + + + + + Uniform Kernel: + x => Math.Abs(x) <= 1.0 ? 1.0/2.0 : 0.0 + + + + + Triangular Kernel: + x => Math.Abs(x) <= 1.0 ? (1.0-Math.Abs(x)) : 0.0 + + + + + A hybrid Monte Carlo sampler for multivariate distributions. + + + + + Number of parameters in the density function. + + + + + Distribution to sample momentum from. + + + + + Standard deviations used in the sampling of different components of the + momentum. + + + + + Gets or sets the standard deviations used in the sampling of different components of the + momentum. + + When the length of pSdv is not the same as Length. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + 1 using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the a random number generator provided by the user. + A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The components of the momentum will be sampled from a normal distribution with standard deviations + given by pSdv. This constructor will set the burn interval, the method used for + numerical differentiation and the random number generator. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. 
+ The number of iterations in between returning samples. + The standard deviations of the normal distributions that are used to sample + the components of the momentum. + Random number generator used for sampling the momentum. + The method used for numerical differentiation. + When the number of burnInterval iteration is negative. + When the length of pSdv is not the same as x0. + + + + Initialize parameters. + + The current location of the sampler. + + + + Checking that the location and the momentum are of the same dimension and that each component is positive. + + The standard deviations used for sampling the momentum. + When the length of pSdv is not the same as Length or if any + component is negative. + When pSdv is null. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the gradient. Uses a simple three point estimation. + + Function which the gradient is to be evaluated. + The location where the gradient is to be evaluated. + The gradient of the function at the point x. + + + + The Hybrid (also called Hamiltonian) Monte Carlo produces samples from distribution P using a set + of Hamiltonian equations to guide the sampling process. It uses the negative of the log density as + a potential energy, and a randomly generated momentum to set up a Hamiltonian system, which is then used + to sample the distribution. This can result in a faster convergence than the random walk Metropolis sampler + (). + + The type of samples this sampler produces. + + + + The delegate type that defines a derivative evaluated at a certain point. + + Function to be differentiated. + Value where the derivative is computed. + + + + Evaluates the energy function of the target distribution. + + + + + The current location of the sampler. + + + + + The number of burn iterations between two samples. + + + + + The size of each step in the Hamiltonian equation. + + + + + The number of iterations in the Hamiltonian equation. + + + + + The algorithm used for differentiation. + + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the number of iterations in the Hamiltonian equation. + + When frog leap steps is negative or zero. + + + + Gets or sets the size of each step in the Hamiltonian equation. + + When step size is negative or zero. + + + + Constructs a new Hybrid Monte Carlo sampler. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + Random number generator used for sampling the momentum. + The method used for differentiation. + When the number of burnInterval iteration is negative. + When either x0, pdfLnP or diff is null. + + + + Returns a sample from the distribution P. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Method used to update the sample location. Used in the end of the loop. + + The old energy. + The old gradient/derivative of the energy. + The new sample. + The new gradient/derivative of the energy. + The new energy. + The difference between the old Hamiltonian and new Hamiltonian. 
Use to determine + if an update should take place. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Method for doing dot product. + + First vector/scalar in the product. + Second vector/scalar in the product. + + + + Method for adding, multiply the second vector/scalar by factor and then + add it to the first vector/scalar. + + First vector/scalar. + Scalar factor multiplying by the second vector/scalar. + Second vector/scalar. + + + + Multiplying the second vector/scalar by factor and then subtract it from + the first vector/scalar. + + First vector/scalar. + Scalar factor to be multiplied to the second vector/scalar. + Second vector/scalar. + + + + Method for sampling a random momentum. + + Momentum to be randomized. + + + + The Hamiltonian equations that is used to produce the new sample. + + + + + Method to compute the Hamiltonian used in the method. + + The momentum. + The energy. + Hamiltonian=E+p.p/2 + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than or equal to zero. + Throws when value is negative. + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than to zero. + Throws when value is negative or zero. + + + + Method to check and set a quantity to a non-negative value. + + Proposed value to be checked. + Returns value if it is greater than zero. + Throws when value is negative or zero. + + + + Provides utilities to analysis the convergence of a set of samples from + a . + + + + + Computes the auto correlations of a series evaluated by a function f. + + The series for computing the auto correlation. + The lag in the series + The function used to evaluate the series. + The auto correlation. + Throws if lag is zero or if lag is + greater than or equal to the length of Series. + + + + Computes the effective size of the sample when evaluated by a function f. + + The samples. + The function use for evaluating the series. + The effective size when auto correlation is taken into account. + + + + A method which samples datapoints from a proposal distribution. The implementation of this sampler + is stateless: no variables are saved between two calls to Sample. This proposal is different from + in that it doesn't take any parameters; it samples random + variables from the whole domain. + + The type of the datapoints. + A sample from the proposal distribution. + + + + A method which samples datapoints from a proposal distribution given an initial sample. The implementation + of this sampler is stateless: no variables are saved between two calls to Sample. This proposal is different from + in that it samples locally around an initial point. In other words, it + makes a small local move rather than producing a global sample from the proposal. + + The type of the datapoints. + The initial sample. + A sample from the proposal distribution. + + + + A function which evaluates a density. + + The type of data the distribution is over. + The sample we want to evaluate the density for. + + + + A function which evaluates a log density. + + The type of data the distribution is over. + The sample we want to evaluate the log density for. + + + + A function which evaluates the log of a transition kernel probability. 
+ + The type for the space over which this transition kernel is defined. + The new state in the transition. + The previous state in the transition. + The log probability of the transition. + + + + The interface which every sampler must implement. + + The type of samples this sampler produces. + + + + The random number generator for this class. + + + + + Keeps track of the number of accepted samples. + + + + + Keeps track of the number of calls to the proposal sampler. + + + + + Initializes a new instance of the class. + + Thread safe instances are two and half times slower than non-thread + safe classes. + + + + Gets or sets the random number generator. + + When the random number generator is null. + + + + Returns one sample. + + + + + Returns a number of samples. + + The number of samples we want. + An array of samples. + + + + Gets the acceptance rate of the sampler. + + + + + Metropolis-Hastings sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis-Hastings sampling doesn't require that the + proposal distribution Q is symmetric in comparison to . It does need to + be able to evaluate the proposal sampler's log density though. All densities are required to be in log space. + + The Metropolis-Hastings sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the target distribution. + + + + + Evaluates the log transition probability for the proposal distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis-Hastings sampler using the default random number generator. This + constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + The log transition probability for the proposal distribution. + A method that samples from the proposal distribution. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Metropolis sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P. Metropolis sampling requires that the proposal + distribution Q is symmetric. All densities are required to be in log space. + + The Metropolis sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + The type of samples this sampler produces. + + + + Evaluates the log density function of the sampling distribution. + + + + + A function which samples from a proposal distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + Constructs a new Metropolis sampler using the default random number generator. + + The initial sample. + The log density of the distribution we want to sample from. 
+ A method that samples from the symmetric proposal distribution. + The number of iterations in between returning samples. + When the number of burnInterval iteration is negative. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Rejection sampling produces samples from distribution P by sampling from a proposal distribution Q + and accepting/rejecting based on the density of P and Q. The density of P and Q don't need to + to be normalized, but we do need that for each x, P(x) < Q(x). + + The type of samples this sampler produces. + + + + Evaluates the density function of the sampling distribution. + + + + + Evaluates the density function of the proposal distribution. + + + + + A function which samples from a proposal distribution. + + + + + Constructs a new rejection sampler using the default random number generator. + + The density of the distribution we want to sample from. + The density of the proposal distribution. + A method that samples from the proposal distribution. + + + + Returns a sample from the distribution P. + + When the algorithms detects that the proposal + distribution doesn't upper bound the target distribution. + + + + A hybrid Monte Carlo sampler for univariate distributions. + + + + + Distribution to sample momentum from. + + + + + Standard deviations used in the sampling of the + momentum. + + + + + Gets or sets the standard deviation used in the sampling of the + momentum. + + When standard deviation is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using the default random + number generator. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a univariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + specified by pSdv using a random + number generator provided by the user. A three point estimation will be used for differentiation. + This constructor will set the burn interval. + + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + Random number generator used to sample the momentum. + When the number of burnInterval iteration is negative. + + + + Constructs a new Hybrid Monte Carlo sampler for a multivariate probability distribution. + The momentum will be sampled from a normal distribution with standard deviation + given by pSdv using a random + number generator provided by the user. This constructor will set both the burn interval and the method used for + numerical differentiation. 
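Stepping back from the constructor details: the Metropolis scheme described a few entries above (symmetric proposal, densities kept in log space, accept with probability min(1, exp(logP(x') - logP(x)))) can be illustrated with a short random-walk sketch; the names, the uniform proposal and the absence of burn-in/thinning are my simplifications, not the library's sampler:

    using System;

    static class MetropolisSketch
    {
        // Random-walk Metropolis with a symmetric uniform proposal, log-space densities.
        public static double[] Chain(Func<double, double> logDensity,
                                     double x0, double step, int count, Random rng)
        {
            var chain = new double[count];
            double x = x0;
            double logP = logDensity(x);
            for (int i = 0; i < count; i++)
            {
                double proposal = x + step * (2.0 * rng.NextDouble() - 1.0);  // symmetric move
                double logPNew = logDensity(proposal);
                if (Math.Log(rng.NextDouble()) < logPNew - logP)              // accept / reject
                {
                    x = proposal;
                    logP = logPNew;
                }
                chain[i] = x;   // no burn-in or thinning here, unlike the samplers above
            }
            return chain;
        }
    }

Calling it with logDensity: x => -0.5 * x * x (an unnormalised standard normal), x0 = 0 and a step of about 1 to 2 produces a chain whose sample mean and variance should hover near 0 and 1.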
+ + The initial sample. + The log density of the distribution we want to sample from. + Number frog leap simulation steps. + Size of the frog leap simulation steps. + The number of iterations in between returning samples. + The standard deviation of the normal distribution that is used to sample + the momentum. + The method used for numerical differentiation. + Random number generator used for sampling the momentum. + When the number of burnInterval iteration is negative. + + + + Use for copying objects in the Burn method. + + The source of copying. + A copy of the source object. + + + + Use for creating temporary objects in the Burn method. + + An object of type T. + + + + + + + + + + + + + Samples the momentum from a normal distribution. + + The momentum to be randomized. + + + + The default method used for computing the derivative. Uses a simple three point estimation. + + Function for which the derivative is to be evaluated. + The location where the derivative is to be evaluated. + The derivative of the function at the point x. + + + + Slice sampling produces samples from distribution P by uniformly sampling from under the pdf of P using + a technique described in "Slice Sampling", R. Neal, 2003. All densities are required to be in log space. + + The slice sampler is a stateful sampler. It keeps track of where it currently is in the domain + of the distribution P. + + + + + Evaluates the log density function of the target distribution. + + + + + The current location of the sampler. + + + + + The log density at the current location. + + + + + The number of burn iterations between two samples. + + + + + The scale of the slice sampler. + + + + + Constructs a new Slice sampler using the default random + number generator. The burn interval will be set to 0. + + The initial sample. + The density of the distribution we want to sample from. + The scale factor of the slice sampler. + When the scale of the slice sampler is not positive. + + + + Constructs a new slice sampler using the default random number generator. It + will set the number of burnInterval iterations and run a burnInterval phase. + + The initial sample. + The density of the distribution we want to sample from. + The number of iterations in between returning samples. + The scale factor of the slice sampler. + When the number of burnInterval iteration is negative. + When the scale of the slice sampler is not positive. + + + + Gets or sets the number of iterations in between returning samples. + + When burn interval is negative. + + + + Gets or sets the scale of the slice sampler. + + + + + This method runs the sampler for a number of iterations without returning a sample + + + + + Returns a sample from the distribution P. + + + + + Running statistics over a window of data, allows updating by adding values. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. 
+ On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + Replace ties with their mean (non-integer ranks). Default. + + + Replace ties with their minimum (typical sports ranking). + + + Replace ties with their maximum. + + + Permutation with increasing values at each index of ties. + + + + Running statistics accumulator, allows updating by adding values + or by combining two accumulators. + + + This type declares a DataContract for out of the box ephemeral serialization + with engines like DataContractSerializer, Protocol Buffers and FsPickler, + but does not guarantee any compatibility between versions. + It is not recommended to rely on this mechanism for durable persistence. + + + + + Gets the total number of samples. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Evaluates the population skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + + + + Evaluates the population kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). 
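A minimal sketch of a combinable running-statistics accumulator in the spirit of the description above, using Welford's update for pushing samples and the usual merge rule for combining two accumulators. It only tracks count, min, max, mean and variance; the documented type also tracks skewness and kurtosis. The class name and field layout are illustrative, not the library's implementation.

```csharp
using System;

class RunningStatsSketch
{
    long _n;
    double _min = double.PositiveInfinity, _max = double.NegativeInfinity;
    double _mean, _m2; // _m2 = running sum of squared deviations from the mean

    public void Push(double x)
    {
        _n++;
        _min = Math.Min(_min, x);
        _max = Math.Max(_max, x);
        double delta = x - _mean;
        _mean += delta / _n;
        _m2 += delta * (x - _mean);
    }

    // Merge two accumulators into a new one covering the combined samples.
    public static RunningStatsSketch Combine(RunningStatsSketch a, RunningStatsSketch b)
    {
        var r = new RunningStatsSketch { _n = a._n + b._n };
        r._min = Math.Min(a._min, b._min);
        r._max = Math.Max(a._max, b._max);
        double delta = b._mean - a._mean;
        r._mean = a._mean + delta * b._n / r._n;
        r._m2 = a._m2 + b._m2 + delta * delta * a._n * b._n / r._n;
        return r;
    }

    public long Count => _n;
    public double Mean => _n > 0 ? _mean : double.NaN;
    public double Variance => _n > 1 ? _m2 / (_n - 1) : double.NaN;      // N-1 normalizer
    public double PopulationVariance => _n > 0 ? _m2 / _n : double.NaN;  // N normalizer
    public double StandardDeviation => Math.Sqrt(Variance);
}
```

Because Combine only needs the two accumulators' summary fields, separate chunks of data can be processed independently and merged afterwards.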
+ Returns NaN if data has less than three entries or if any entry is NaN. + + + + + Update the running statistics by adding another observed sample (in-place). + + + + + Update the running statistics by adding a sequence of observed sample (in-place). + + + + + Create a new running statistics over the combined samples of two existing running statistics. + + + + + Statistics operating on an array already sorted ascendingly. + + + + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. 
+ Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Returns the smallest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the largest value from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + + + + Returns the order statistic (order 1..N) from the sorted data array (ascending). + + Sample array, must be sorted ascendingly. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Estimates the median value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the p-Percentile value from the sorted data array (ascending). + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the first quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the third quartile value from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the inter-quartile range from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the sorted data array (ascending). + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + + R-8, SciPy-(1/3,1/3): + Linear interpolation of the approximate medians for order statistics. + When tau < (2/3) / (N + 1/3), use x1. When tau >= (N - 1/3) / (N + 1/3), use xN. + + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified + by 4 parameters a, b, c and d, consistent with Mathematica. 
+ + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + a-parameter + b-parameter + c-parameter + d-parameter + + + + Estimates the tau-th quantile from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + Sample array, must be sorted ascendingly. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the sorted data array (ascending). + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the quantile tau from the sorted data array (ascending). + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the sorted data array (ascending). + The rank definition can be specified to be compatible + with an existing system. + + + + + Extension methods to return basic statistics on set of data. + + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the maximum absolute value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The maximum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the minimum magnitude and phase value in the sample data. 
+ Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Returns the maximum magnitude and phase value in the sample data. + Returns NaN if data is empty or if any entry is NaN. + + The sample data. + The minimum value in the sample data. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the sample mean, an estimate of the population mean. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The data to calculate the mean of. + The mean of the sample. + + + + Evaluates the geometric mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the geometric mean of. + The geometric mean of the sample. + + + + Evaluates the geometric mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the geometric mean of. + The geometric mean of the sample. + + + + Evaluates the harmonic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the harmonic mean of. + The harmonic mean of the sample. + + + + Evaluates the harmonic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the harmonic mean of. + The harmonic mean of the sample. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the variance from the provided full population. + On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population standard deviation from the provided samples. 
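The following sketch illustrates the arithmetic, geometric and harmonic means and the two variance normalizers described above. The geometric mean is taken via the mean of logarithms to avoid overflow; names and edge-case handling are illustrative, not the library's code.

```csharp
using System;
using System.Linq;

static class MeanVarianceSketch
{
    public static double Mean(double[] data) =>
        data.Length == 0 ? double.NaN : data.Average();

    public static double GeometricMean(double[] data) =>
        data.Length == 0 ? double.NaN : Math.Exp(data.Average(x => Math.Log(x)));

    public static double HarmonicMean(double[] data) =>
        data.Length == 0 ? double.NaN : data.Length / data.Sum(x => 1.0 / x);

    // Unbiased sample variance: N-1 normalizer (Bessel's correction).
    public static double Variance(double[] samples)
    {
        if (samples.Length < 2) return double.NaN;
        double mean = samples.Average();
        return samples.Sum(x => (x - mean) * (x - mean)) / (samples.Length - 1);
    }

    // Population variance: N normalizer, biased if applied to a subset.
    public static double PopulationVariance(double[] population)
    {
        if (population.Length == 0) return double.NaN;
        double mean = population.Average();
        return population.Sum(x => (x - mean) * (x - mean)) / population.Length;
    }
}
```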
+ On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + + + + Evaluates the standard deviation from the provided full population. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population skewness from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + + The full population data. + + + + Evaluates the skewness from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population kurtosis from the provided samples. + Uses a normalizer (Bessel's correction; type 2). + Returns NaN if data has less than four entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + + + + Evaluates the kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + + The full population data. + + + + Evaluates the kurtosis from the full population. 
+ Does not use a normalizer and would thus be biased if applied to a subset (type 1). + Returns NaN if data has less than three entries or if any entry is NaN. + Null-entries are ignored. + + The full population data. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population variance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for variance if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the sample mean and the unbiased population standard deviation from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or if any entry is NaN and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + The data to calculate the mean of. + The mean of the sample. + + + + Estimates the unbiased population skewness and kurtosis from the provided samples in a single pass. + Uses a normalizer (Bessel's correction; type 2). + + A subset of samples, sampled from the full population. + + + + Evaluates the skewness and kurtosis from the full population. + Does not use a normalizer and would thus be biased if applied to a subset (type 1). + + The full population data. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Estimates the unbiased population covariance from the provided samples. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + Null-entries are ignored. + + A subset of samples, sampled from the full population. + A subset of samples, sampled from the full population. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. 
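A short sketch of the two covariance variants described above, differing only in the normalizer. It assumes both sequences have the same length; the names are illustrative.

```csharp
using System;
using System.Linq;

static class CovarianceSketch
{
    // Unbiased sample covariance: N-1 normalizer (Bessel's correction).
    public static double Covariance(double[] x, double[] y)
    {
        if (x.Length != y.Length || x.Length < 2) return double.NaN;
        double meanX = x.Average(), meanY = y.Average();
        double sum = 0.0;
        for (int i = 0; i < x.Length; i++) sum += (x[i] - meanX) * (y[i] - meanY);
        return sum / (x.Length - 1);
    }

    // Population covariance: N normalizer, biased if applied to a subset.
    public static double PopulationCovariance(double[] x, double[] y)
    {
        if (x.Length != y.Length || x.Length == 0) return double.NaN;
        double meanX = x.Average(), meanY = y.Average();
        double sum = 0.0;
        for (int i = 0; i < x.Length; i++) sum += (x[i] - meanX) * (y[i] - meanY);
        return sum / x.Length;
    }
}
```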
+ + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + The full population data. + The full population data. + + + + Evaluates the population covariance from the provided full populations. + On a dataset of size N will use an N normalize and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The full population data. + The full population data. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + + The data to calculate the RMS of. + + + + Evaluates the root mean square (RMS) also known as quadratic mean. + Returns NaN if data is empty or if any entry is NaN. + Null-entries are ignored. + + The data to calculate the mean of. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the sample median from the provided samples (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). 
+ Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the tau-th quantile from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile definition, to choose what product/definition it should be consistent with + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + Percentile selector, between 0 and 100 (inclusive). + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the p-Percentile value from the provided samples. + If a non-integer Percentile is needed, use Quantile instead. 
+ Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the first quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the third quartile value from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates the inter-quartile range from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Estimates {min, lower-quantile, median, upper-quantile, max} from the provided samples. + Approximately median-unbiased regardless of the sample distribution (R8). + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + One-based order of the statistic, must be between 1 and N (inclusive). + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Returns the order statistic (order 1..N) from the provided samples. + + The data sample sequence. + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. + The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Evaluates the rank of each entry of the provided samples. 
+ The rank definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Quantile value. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the quantile tau from the provided samples. + The tau-th quantile is the data value where the cumulative distribution + function crosses tau. The quantile definition can be specified to be compatible + with an existing system. + + The data sample sequence. + Rank definition, to choose how ties should be handled and what product/definition it should be consistent with + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + The value where to estimate the CDF at. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. + + + + Estimates the empirical cumulative distribution function (CDF) at x from the provided samples. + + The data sample sequence. 
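To make the quantile, empirical-CDF and ranking descriptions above concrete, here is a minimal sketch of the R-8 quantile on ascending-sorted data, the empirical CDF, and mean-rank tie handling (the default tie-breaking rule listed earlier). Class and method names are illustrative only, and boundary conventions may differ in detail from the library's.

```csharp
using System;
using System.Linq;

static class QuantileRankSketch
{
    // R-8 quantile: h = (N + 1/3)*tau + 1/3, linear interpolation between neighbouring
    // order statistics, with the boundary rule quoted above (x1 below, xN above).
    public static double QuantileR8(double[] sorted, double tau)
    {
        int n = sorted.Length;
        if (n == 0 || tau < 0.0 || tau > 1.0) return double.NaN;
        if (n == 1) return sorted[0];
        double h = (n + 1.0 / 3.0) * tau + 1.0 / 3.0;
        if (h <= 1.0) return sorted[0];
        if (h >= n) return sorted[n - 1];
        int k = (int)Math.Floor(h);                        // lower order statistic, 1-based
        return sorted[k - 1] + (h - k) * (sorted[k] - sorted[k - 1]);
    }

    // Empirical CDF: fraction of samples less than or equal to x.
    public static double EmpiricalCdf(double[] samples, double x) =>
        samples.Length == 0 ? double.NaN : samples.Count(v => v <= x) / (double)samples.Length;

    // Ranks with ties replaced by their mean, returned in the original order.
    public static double[] RanksAverage(double[] data)
    {
        int n = data.Length;
        int[] order = Enumerable.Range(0, n).OrderBy(i => data[i]).ToArray();
        var ranks = new double[n];
        for (int start = 0; start < n; )
        {
            int end = start;
            while (end + 1 < n && data[order[end + 1]] == data[order[start]]) end++;
            double meanRank = (start + end + 2) / 2.0;     // mean of the tied 1-based positions
            for (int i = start; i <= end; i++) ranks[order[i]] = meanRank;
            start = end + 1;
        }
        return ranks;
    }
}
```

Note that for tau = 0.5 the R-8 position works out to N/2 + 1/2, so the R-8 median coincides with the familiar midpoint-of-the-two-central-values rule.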
+ + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + Quantile selector, between 0.0 and 1.0 (inclusive). + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Estimates the empirical inverse CDF at tau from the provided samples. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + + The data sample sequence. + + + + Calculates the entropy of a stream of double values in bits. + Returns NaN if any of the values in the stream are NaN. + Null-entries are ignored. + + The data sample sequence. + + + + Evaluates the sample mean over a moving window, for each samples. + Returns NaN if no data is empty or if any entry is NaN. + + The sample stream to calculate the mean of. + The number of last samples to consider. + + + + Statistics operating on an IEnumerable in a single pass, without keeping the full data in memory. + Can be used in a streaming way, e.g. on large datasets not fitting into memory. + + + + + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the smallest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. 
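The entropy mentioned above is, presumably, the Shannon entropy (in bits) of the relative frequencies of the distinct values in the stream; a sketch under that assumption follows. The empty-stream result of 0 is an arbitrary choice for this sketch.

```csharp
using System;
using System.Collections.Generic;

static class EntropySketch
{
    public static double Entropy(IEnumerable<double> data)
    {
        var counts = new Dictionary<double, long>();
        long total = 0;
        foreach (var x in data)
        {
            if (double.IsNaN(x)) return double.NaN;        // any NaN poisons the result
            counts[x] = counts.TryGetValue(x, out var c) ? c + 1 : 1;
            total++;
        }
        if (total == 0) return 0.0;                        // empty stream (sketch convention)

        double entropy = 0.0;
        foreach (var count in counts.Values)
        {
            double p = (double)count / total;
            entropy -= p * Math.Log(p, 2.0);               // contribution of one distinct value
        }
        return entropy;
    }
}
```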
+ + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Returns the largest absolute value from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the geometric mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the harmonic mean of the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population variance from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. 
+ + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Evaluates the population standard deviation from the full population provided as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population variance from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for variance if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the arithmetic sample mean and the unbiased population standard deviation from the provided samples as enumerable sequence, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN for mean if data is empty or any entry is NaN, and NaN for standard deviation if data has less than two entries or if any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Estimates the unbiased population covariance from the provided two sample enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N-1 normalizer (Bessel's correction). + Returns NaN if data has less than two entries or if any entry is NaN. + + First sample stream. + Second sample stream. + + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. 
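In the spirit of the single-pass routines above, this sketch computes the covariance of two streams while keeping only running means and a co-moment, never the full data. Names are illustrative; use the N normalizer instead of N-1 for the population variant.

```csharp
using System;
using System.Collections.Generic;

static class StreamingCovarianceSketch
{
    public static double Covariance(IEnumerable<double> xs, IEnumerable<double> ys)
    {
        long n = 0;
        double meanX = 0.0, meanY = 0.0, comoment = 0.0;

        using (var ex = xs.GetEnumerator())
        using (var ey = ys.GetEnumerator())
        {
            while (ex.MoveNext() && ey.MoveNext())
            {
                n++;
                double dx = ex.Current - meanX;
                meanX += dx / n;
                meanY += (ey.Current - meanY) / n;
                comoment += dx * (ey.Current - meanY);     // uses the *updated* meanY
            }
        }
        return n < 2 ? double.NaN : comoment / (n - 1);    // N-1 normalizer (sample covariance)
    }
}
```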
+ + + + Evaluates the population covariance from the full population provided as two enumerable sequences, in a single pass without memoization. + On a dataset of size N will use an N normalizer and would thus be biased if applied to a subset. + Returns NaN if data is empty or if any entry is NaN. + + First population stream. + Second population stream. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Estimates the root mean square (RMS) also known as quadratic mean from the enumerable, in a single pass without memoization. + Returns NaN if data is empty or any entry is NaN. + + Sample stream, no sorting is assumed. + + + + Calculates the entropy of a stream of double values. + Returns NaN if any of the values in the stream are NaN. + + The input stream to evaluate. + + + + + Used to simplify parallel code, particularly between the .NET 4.0 and Silverlight Code. + + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The body to be invoked for each iteration range. + + + + Executes a for loop in which iterations may run in parallel. + + The start index, inclusive. + The end index, exclusive. + The partition size for splitting work into smaller pieces. + The body to be invoked for each iteration range. + + + + Executes each of the provided actions inside a discrete, asynchronous task. + + An array of actions to execute. + The actions array contains a null element. + At least one invocation of the actions threw an exception. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + The selected value. + + + + Selects an item (such as Max or Min). + + Starting index of the loop. + Ending index of the loop + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Selects an item (such as Max or Min). + + The array to iterate over. + The function to select items over a subset. + The function to select the item of selection from the subsets. + Default result of the reduce function on an empty set. + The selected value. + + + + Double-precision trigonometry toolkit. + + + + + Constant to convert a degree to grad. + + + + + Converts a degree (360-periodic) angle to a grad (400-periodic) angle. + + The degree to convert. + The converted grad angle. + + + + Converts a degree (360-periodic) angle to a radian (2*Pi-periodic) angle. + + The degree to convert. + The converted radian angle. + + + + Converts a grad (400-periodic) angle to a degree (360-periodic) angle. + + The grad to convert. + The converted degree. + + + + Converts a grad (400-periodic) angle to a radian (2*Pi-periodic) angle. + + The grad to convert. + The converted radian. + + + + Converts a radian (2*Pi-periodic) angle to a degree (360-periodic) angle. + + The radian to convert. + The converted degree. 
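Since a full circle is 360 degrees, 400 grad or 2*pi radians, every conversion listed above is a single constant factor. A sketch (class and method names are illustrative):

```csharp
using System;

static class AngleSketch
{
    public static double DegreeToGrad(double degree)  => degree * (400.0 / 360.0);
    public static double DegreeToRadian(double degree) => degree * (Math.PI / 180.0);
    public static double GradToDegree(double grad)    => grad * (360.0 / 400.0);
    public static double GradToRadian(double grad)    => grad * (Math.PI / 200.0);
    public static double RadianToDegree(double radian) => radian * (180.0 / Math.PI);
    public static double RadianToGrad(double radian)  => radian * (200.0 / Math.PI);
}
```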
+ + + + Converts a radian (2*Pi-periodic) angle to a grad (400-periodic) angle. + + The radian to convert. + The converted grad. + + + + Normalized Sinc function. sinc(x) = sin(pi*x)/(pi*x). + + + + + Trigonometric Sine of an angle in radian, or opposite / hypotenuse. + + The angle in radian. + The sine of the radian angle. + + + + Trigonometric Sine of a Complex number. + + The complex value. + The sine of the complex number. + + + + Trigonometric Cosine of an angle in radian, or adjacent / hypotenuse. + + The angle in radian. + The cosine of an angle in radian. + + + + Trigonometric Cosine of a Complex number. + + The complex value. + The cosine of a complex number. + + + + Trigonometric Tangent of an angle in radian, or opposite / adjacent. + + The angle in radian. + The tangent of the radian angle. + + + + Trigonometric Tangent of a Complex number. + + The complex value. + The tangent of the complex number. + + + + Trigonometric Cotangent of an angle in radian, or adjacent / opposite. Reciprocal of the tangent. + + The angle in radian. + The cotangent of an angle in radian. + + + + Trigonometric Cotangent of a Complex number. + + The complex value. + The cotangent of the complex number. + + + + Trigonometric Secant of an angle in radian, or hypotenuse / adjacent. Reciprocal of the cosine. + + The angle in radian. + The secant of the radian angle. + + + + Trigonometric Secant of a Complex number. + + The complex value. + The secant of the complex number. + + + + Trigonometric Cosecant of an angle in radian, or hypotenuse / opposite. Reciprocal of the sine. + + The angle in radian. + Cosecant of an angle in radian. + + + + Trigonometric Cosecant of a Complex number. + + The complex value. + The cosecant of a complex number. + + + + Trigonometric principal Arc Sine in radian + + The opposite for a unit hypotenuse (i.e. opposite / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Sine of this Complex number. + + The complex value. + The arc sine of a complex number. + + + + Trigonometric principal Arc Cosine in radian + + The adjacent for a unit hypotenuse (i.e. adjacent / hypotenuse). + The angle in radian. + + + + Trigonometric principal Arc Cosine of this Complex number. + + The complex value. + The arc cosine of a complex number. + + + + Trigonometric principal Arc Tangent in radian + + The opposite for a unit adjacent (i.e. opposite / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Tangent of this Complex number. + + The complex value. + The arc tangent of a complex number. + + + + Trigonometric principal Arc Cotangent in radian + + The adjacent for a unit opposite (i.e. adjacent / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cotangent of this Complex number. + + The complex value. + The arc cotangent of a complex number. + + + + Trigonometric principal Arc Secant in radian + + The hypotenuse for a unit adjacent (i.e. hypotenuse / adjacent). + The angle in radian. + + + + Trigonometric principal Arc Secant of this Complex number. + + The complex value. + The arc secant of a complex number. + + + + Trigonometric principal Arc Cosecant in radian + + The hypotenuse for a unit opposite (i.e. hypotenuse / opposite). + The angle in radian. + + + + Trigonometric principal Arc Cosecant of this Complex number. + + The complex value. + The arc cosecant of a complex number. + + + + Hyperbolic Sine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic sine of the angle. 
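A sketch of the normalized sinc function defined above and of the reciprocal trigonometric functions (cotangent, secant, cosecant as reciprocals of tan, cos and sin). The tolerance used for the removable singularity at zero is an illustrative choice.

```csharp
using System;

static class TrigSketch
{
    // sinc(x) = sin(pi*x) / (pi*x), with sinc(0) = 1 (removable singularity).
    public static double Sinc(double x)
    {
        double z = Math.PI * x;
        return Math.Abs(z) < 1e-15 ? 1.0 : Math.Sin(z) / z;
    }

    public static double Cot(double radian) => 1.0 / Math.Tan(radian);  // adjacent / opposite
    public static double Sec(double radian) => 1.0 / Math.Cos(radian);  // hypotenuse / adjacent
    public static double Csc(double radian) => 1.0 / Math.Sin(radian);  // hypotenuse / opposite
}
```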
+ + + + Hyperbolic Sine of a Complex number. + + The complex value. + The hyperbolic sine of a complex number. + + + + Hyperbolic Cosine + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic Cosine of the angle. + + + + Hyperbolic Cosine of a Complex number. + + The complex value. + The hyperbolic cosine of a complex number. + + + + Hyperbolic Tangent in radian + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic tangent of the angle. + + + + Hyperbolic Tangent of a Complex number. + + The complex value. + The hyperbolic tangent of a complex number. + + + + Hyperbolic Cotangent + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cotangent of the angle. + + + + Hyperbolic Cotangent of a Complex number. + + The complex value. + The hyperbolic cotangent of a complex number. + + + + Hyperbolic Secant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic secant of the angle. + + + + Hyperbolic Secant of a Complex number. + + The complex value. + The hyperbolic secant of a complex number. + + + + Hyperbolic Cosecant + + The hyperbolic angle, i.e. the area of the hyperbolic sector. + The hyperbolic cosecant of the angle. + + + + Hyperbolic Cosecant of a Complex number. + + The complex value. + The hyperbolic cosecant of a complex number. + + + + Hyperbolic Area Sine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Sine of this Complex number. + + The complex value. + The hyperbolic arc sine of a complex number. + + + + Hyperbolic Area Cosine + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosine of this Complex number. + + The complex value. + The hyperbolic arc cosine of a complex number. + + + + Hyperbolic Area Tangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Tangent of this Complex number. + + The complex value. + The hyperbolic arc tangent of a complex number. + + + + Hyperbolic Area Cotangent + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cotangent of this Complex number. + + The complex value. + The hyperbolic arc cotangent of a complex number. + + + + Hyperbolic Area Secant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Secant of this Complex number. + + The complex value. + The hyperbolic arc secant of a complex number. + + + + Hyperbolic Area Cosecant + + The real value. + The hyperbolic angle, i.e. the area of its hyperbolic sector. + + + + Hyperbolic Area Cosecant of this Complex number. + + The complex value. + The hyperbolic arc cosecant of a complex number. + + + + Hamming window. Named after Richard Hamming. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hamming window. Named after Richard Hamming. + Periodic version, useful e.g. for FFT purposes. + + + + + Hann window. Named after Julius von Hann. + Symmetric version, useful e.g. for filter design purposes. + + + + + Hann window. Named after Julius von Hann. + Periodic version, useful e.g. for FFT purposes. + + + + + Cosine window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Cosine window. + Periodic version, useful e.g. for FFT purposes. + + + + + Lanczos window. + Symmetric version, useful e.g. for filter design purposes. + + + + + Lanczos window. 
+ Periodic version, useful e.g. for FFT purposes. + + + + + Gauss window. + + + + + Blackman window. + + + + + Blackman-Harris window. + + + + + Blackman-Nuttall window. + + + + + Bartlett window. + + + + + Bartlett-Hann window. + + + + + Nuttall window. + + + + + Flat top window. + + + + + Uniform rectangular (Dirichlet) window. + + + + + Triangular window. + + + + + Tukey tapering window. A rectangular window bounded + by half a cosine window on each side. + + Width of the window + Fraction of the window occupied by the cosine parts + +
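To illustrate the symmetric-versus-periodic distinction that runs through the window list above, here is a sketch using the Hamming and Hann windows. It uses the classic 0.54/0.46 Hamming coefficients; some implementations prefer the slightly different 0.53836/0.46164 pair, so treat the exact values as an assumption.

```csharp
using System;

static class WindowSketch
{
    // Symmetric form: denominator N-1, endpoints mirror each other (filter design).
    public static double[] HammingSymmetric(int width)
    {
        var w = new double[width];
        if (width == 1) { w[0] = 1.0; return w; }
        for (int n = 0; n < width; n++)
            w[n] = 0.54 - 0.46 * Math.Cos(2.0 * Math.PI * n / (width - 1));
        return w;
    }

    // Periodic form: denominator N, so consecutive FFT frames tile seamlessly.
    public static double[] HannPeriodic(int width)
    {
        var w = new double[width];
        for (int n = 0; n < width; n++)
            w[n] = 0.5 * (1.0 - Math.Cos(2.0 * Math.PI * n / width));
        return w;
    }
}
```

The symmetric variants are the natural choice when designing FIR filter taps, while the periodic variants avoid the duplicated endpoint when windowing successive FFT blocks.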
+